2023-11-27
[ { "figure_ref": [ "fig_0", "fig_1" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b6", "b7", "b11", "b12", "b15", "b12", "b12" ], "table_ref": [], "text": "P EDESTRIAN trajectory prediction in a first-person view is a crucial technology in autonomous driving [1]- [7] because it is necessary to predict nearby pedestrians' actions and locations to avoid collision. Unlike pedestrian trajectory prediction in a bird-eye view camera (BEV) [8]- [12], which mainly focuses on trajectory coordinates, pedestrian trajectory prediction in a first-person view camera (FPV) [13]- [16] have richer annotated pedestrian character information at each time step, including their actions, gestures, genders, etc. In the category dimension, some pedestrian characters are irrelevant to trajectory modes (i.e., gender, age). In the temporal dimension, a determined type of character (such as the hand gesture) at time steps far from future (i.e., T = 1, 2) are irrelevant to trajectory modes. We name the character information irrelevant to trajectory modes as negative characters, which may negatively affect trajectory representation and lead to performance degradation.\nThey can help distinguish some similar trajectories that are indistinguishable by trajectory coordinates. For example, with the same historical trajectory and final goal, the future trajectory of a girl texting on the phone while walking could be different from a man pushing a stroller [13].\nPrevious work [13] has proved that pedestrian character information, i.e., action, can improve the accuracy of trajectory prediction by improving the trajectory coordinates representation. However, it ignores invalid or negative pedestrian character information, which could cause performance degradation, especially regarding multi-category characters and long-term prediction. As shown in Figure 1, in the category dimension, different kinds of pedestrian characters, such as hand gestures, gender, age, etc., influence performance differently due to whether they are relevant to future trajectory modes. Some types of pedestrian characters have positive effects on performance for trajectory prediction, while some may have adverse effects, for example, the prediction using \"gender\" or \"age\" character information in some datasets, which is proved by our ablation study. In the temporal dimension, pedestrian character information at different time steps has different influences on performance because of different time intervals to the future. Pedestrian characters at some distant time steps from the future may be useless or even have adverse effects, for example, time steps at T = 1, 2. In sum, some pedestrian characters can improve prediction accuracy to varying degrees, while some may cause negative influences. Hence, it is a meaningful task to make full use of valid pedestrian characters and eliminate negative ones.\nMotivated by the analysis above, we present a two-stream sparse-character-based network for pedestrian trajectory prediction. The proposed TSNet includes three key components: a sparse character representation stream, a trajectory representation stream, and a decoder module, as shown in Figure 2. To model the negative-removed pedestrian characters, we propose a novel sparse character graph to represent different effects of various pedestrian characters and remove side-effect ones in the sparse character representation stream. 
The sparse character graph includes a sparse temporal character graph and a sparse category character graph to model different effects in temporal and category dimensions, respectively. To construct the sparse temporal character graph without negative characters, we first learn the importance weights of a single character category at different time steps to form a mask. Then, by stacking the obtained masks of all categories together, we have a temporal mask of all characters. Finally, we use all characters and the temporal mask to generate a sparse temporal character graph without negative characters. Similarly, to construct the sparse category character graph without negative characters, we can first learn the importance weights of all character categories at a single time step to form a mask, and then obtain the category mask of all characters by stacking masks of all time steps together. Then we can represent the negative-removed characters in the category dimension by a sparse category character graph formed by all characters and the obtained category mask.\nBy our proposed sparse character graph, we can learn the negative-removed character features in the sparse character representation stream using the self-attention mechanism and convolutional networks. Meanwhile, we can learn the trajectory coordinates representation from observed trajectory in the trajectory representation stream by gated-recurrent unit (GRU) encoders. Subsequently, we use the learned negative-removed character features to improve the trajectory representation by concatenation. Finally, we decode the improved trajectory representation into the predicted trajectory in the decoder module. In summary, our contributions are four-fold:\n• To the best of our knowledge, this is the first work that models sparse pedestrian characters in pedestrian trajectory prediction to make full use of valid characters and eliminate negative ones. • We design a two-stream sparse-character-based pedestrian trajectory prediction network to improve the trajectory representation by negative-removed characters. • We propose a novel sparse character graph for trajectory prediction to model the negative-removed representation of pedestrian characters according to its relevance to trajectory modes. • Experiments on well-established first-person view datasets demonstrate that our approach significantly outperforms the state-of-the-art methods. We also conduct extensive ablation studies to validate the effectiveness of our contributions. The rest of this paper is organized as follows. We briefly review the related work in Section II. We present the technical details of our proposed method in Section III. Then extensive experiments and analysis are presented in Section IV. Finally, we conclude the paper in Section V." }, { "figure_ref": [], "heading": "II. RELATED WORKS", "publication_ref": [], "table_ref": [], "text": "In this section, we review previous works related to ours, which we categorize into three parts: (1) pedestrian trajectory prediction, (2) trajectory prediction using pedestrian characters and (3) graph structure learning." }, { "figure_ref": [], "heading": "A. Pedestrian Trajectory Prediction", "publication_ref": [ "b16", "b21", "b22", "b24", "b22", "b23", "b24", "b25", "b27", "b25", "b26", "b27", "b28", "b29", "b30", "b31", "b32", "b33", "b34", "b35", "b39" ], "table_ref": [], "text": "Pedestrian trajectory prediction in a bird-eye view (BEV) [17]- [22] observes the pedestrian trajectories from a static camera facing downwards. 
Earlier methods [23]- [25] only utilize trajectory coordinates as temporal sequences to make the prediction. Social-LSTM [23] forecasts the pedestrian trajectory by learning coordinates representation with Recurrent Neural Networks. Social-GAN [24] predicts the pedestrian trajectory by learning coordinates representation with the Generative Adversarial Network. SIT [25] models the human trajectory coordinates as a social interpretable tree to model the multimodal futures. However, people are not independent individuals in social environments, and the social interaction between people will also affect the future trajectories. Hence, it is essential to consider human-human interaction [26]- [28] in pedestrian trajectory prediction. Some methods model interactions between humans to improve the future forecasting. Social-BiGAT [26] models the human interaction by the graph attention network. AgentFormer [27] models the spatial interaction by the Transformer. GP-Graph [28] models the interaction of pedestrians as a group graph. Since trajectory information is limited, many methods utilize scene information to help the prediction. Sophie [29] models the scene with CNN [30], [31]. Y-net [32] models the scene using a series of U-nets [33]. End-to-End [34] models the scene using CNN and a convolutional long short-term memory [35] network.\nHowever, limited by the scene-invariant [36]- [40] property of the bird-eye view, the practical scene information that can be used is minimal. What is worse, due to limitations of the bird-eye view, the BEV camera can not capture pedestrians' character information, such as human appearances and individual actions, which are more valuable for learning. Hence, more recent studies have begun to focus on the firstperson view pedestrian trajectory prediction. In this work, we focus on improving the trajectory coordinates representation through the proposed sparse character graph in the first-person view camera." }, { "figure_ref": [], "heading": "B. Trajectory Prediction Using Pedestrian Characters", "publication_ref": [ "b12", "b15", "b40", "b41", "b12" ], "table_ref": [], "text": "Pedestrian Trajectory prediction in a first-person view (FPV) [13]- [16] observes the pedestrian trajectories from an on-board camera with ego-motion, which can capture more valuable pedestrian character information such as hand gesture, head motion, walking speed, age, gender, etc. Some works utilize individual visual features to improve pedestrian trajectory prediction in the first-person camera. DBN [41] models pedestrians' awareness from their faces by a Dynamic Bayesian Network to predict whether they cross the road. FPL [42] models pedestrian' body keypoint features by a convolutional neural network to improve the trajectory coordinate representation. A recent work, ABC [13], proposes an action-based contrastive learning method to utilize pedestrian action information to improve trajectory representation by classifying similar trajectories with different hunman actions, which achieves state-of-the-art performance. All the approaches demonstrate the effectiveness of pedestrian trajectory prediction with human characters.\nHowever, previous works ignore the invalid or negative pedestrian characters, which could influence trajectory representation and thus cause performance degradation. To address this issue, our proposed pedestrian sparse character graph can distinguish different effects of pedestrian characters in temporal and category dimensions. 
Accordingly, we can fully use valid pedestrian characters and eliminate invalid or negative ones." }, { "figure_ref": [], "heading": "C. Graph Structure Learning", "publication_ref": [ "b42", "b46", "b47", "b48", "b49", "b50", "b51" ], "table_ref": [], "text": "Graph Structure Learning (GSL) [43]- [47] aim to learn a graph structure which is more suitable for a certain task from the original data feature and explore the implicit and correlative information absent from the original data [48]. The existing graph structure learning methods can be grouped into two categories: traditional graph methods and deep graph methods. For traditional graph methods, Takai et al. [49] developed local and global clustering algorithms based on PageRank for hypergraphs. For deep graph methods, Social-STGCNN [50] propose a social spatio-temporal graph convolutional neural network to model the social interaction of pedestrians. RSBG [51] recursively extract social representations supervised by groupbased annotations and formulate them into a social behavior graph. GroupNet [52] propose a trainable multiscale hypergraph to capture both pair-wise and group-wise interactions at multiple group sizes.\nThe previous graph structure learning method aims to discover the implicit information. However, to fully use valid pedestrian characters and eliminate negative ones, this work aims to remove negative information which has side effects on the pedestrian trajectory representation. Hence, we propose the sparse character graph to remove such negative information by eliminating nodes with the adaptive masks. And the constructed sparse character graph can improve trajectory embedding without side-effect from negative characters." }, { "figure_ref": [], "heading": "III. METHOD", "publication_ref": [], "table_ref": [], "text": "In this section, we present our method for the first-person view pedestrian trajectory prediction, in which we focus on learning the negative-removed characters of pedestrians to improve the trajectory representation. We achieved this by constructing a two-stream sparse-character-based network, in which we model the past trajectory coordinates by the trajectory representation stream and then improve its learned representation by our proposed sparse character graph in the sparse character representation stream.\nSection III-A first gives the definition of the pedestrian trajectory prediction problem. Section III-B details the sparse character representation stream, including the construction and generation of temporal & category character graphs, temporal & category masks and sparse temporal & category graphs. Section III-C details the trajectory representation stream, including trajectory encoders and the CVAE sampler. Section III-D details the decoder module loss function of the overall framework." }, { "figure_ref": [], "heading": "A. Problem Formulation", "publication_ref": [], "table_ref": [], "text": "Given an observed trajectory X = [x 1 , x 2 , ..., x T obs ], pedestrian trajectory prediction aims to predict a future trajectory Y = [y T obs +1 , y T obs +2 , ..., y T pred ], where x and y are bounding box coordinates of the observed trajectory and future trajectory, respectively. T obs and T pred are the maximum time lengths of the observed trajectory and predicted trajectory, respectively. In addition, we also have pedestrian character labels of N categories for each observed trajectory X: S = [s 1 , s 2 , ..., s N ] at each time step, where\ns n = [s n 1 , s n 2 , ..., s n T obs ]. 
Note that s n t\nis the pedestrian character information of the n th category at the t th observed time step." }, { "figure_ref": [], "heading": "B. Sparse Character Representation Stream", "publication_ref": [ "b52", "b53", "b54", "b54", "b55", "b56", "b57", "b49", "b58" ], "table_ref": [], "text": "Temporal & Category Character Graphs. To model the pedestrian characters, we first construct the temporal and category character graphs. Given the pedestrian character input S, we can construct a set of temporal character graphs\nG tem n ∈ {G tem 1 , G tem 2 , .\n.., G tem N }, which represent the pedestrian character information of category n at all observed time steps. We defined the graph \nG tem n = (V n , E n , F n ), where V n = {v n i |i = {1, ...,\nG cat t = (V t , E t , F t ), where V t = {v i t |i = {1, ..., N }} is the vertex set of the graph G cat t . The pedestrian character information s i t is the attribute of v i t . E t = {e i,j t |i, j = {1, ..., N }} is the edge set of the graph G cat n . F t = {f i t |i = {1, ..., N }} ∈ R N ×D t\nf is feature matrix associated with each pedestrian character information v i t , where D t f is the feature dimension. We initialize all attributes e n i,j and e i,j t as one, assuming that each pair of vertices is connected by an edge.\nThe topological structures of graphs G tem n and G cat t are represented by the adjacency matrices A n = {a n;i,j |i, j = 1, ...T obs } ∈ R T obs ×T obs and A t = {a t;i,j |i, j = 1, ...N } ∈ R N ×N , respectively. The values of a n;i,j and a t;i,j in adjacency matrices A n and A t are initialized as follows:\na n;i,j = ||v n i -v n j || 2 , a t;i,j = ||v i t -v j t || 2 ,(1)\nwhere || * || 2 is L 2 -norm. The values of f n i and f i t in feature matrix F n and F t are defined as:\nf n i = ϕ(v n i , W n ), f i t = ϕ(v i t , W t ),(2)\nwhere ϕ(•, •) denotes linear transformation. W n and W t are the weights of the linear transformation.\nTemporal & Category Masks. To distinguish the different effects of pedestrian character in temporal and category dimensions, we need to learn the sparsity masks of the pedestrian character features F n and F t . We first compute the temporal character attention score matrix R t ∈ R H×T obs ×T obs by the multi-head self-attention mechanism [53], [54] as:\nQ tem i = ϕ(F n , W t Q ), K tem i = ϕ(F n , W t K ), O tem i = Softmax( Q tem i (K tem i ) T √ d t ), R t = Concat(O tem i ), i = 1, 2, ..., H,(3)\nwhere\nQ tem i ∈ R T obs ×D t q and K tem i ∈ R T obs ×D t k\nare the query and key, respectively. W t Q and W t K are the weights of the linear transformation. i is the index of H heads. O tem i ∈ R T obs ×T obs is the attention score of the i th attention head.\n√ d t = D t q\nis a scaled factor [55]. Similarly, we can compute category character attention score matrix R c ∈ R H×N ×N as:\nQ cat i = ϕ(F t , W c Q ), K cat i = ϕ(F t , W c K ), O cat i = Softmax( Q cat i (K cat i ) T √ d c ), R c = Concat(O cat i ), i = 1, 2, ..., H,(4)\nwhere\nQ cat i ∈ R N ×D c q and K cat i ∈ R N ×D c\nk are the query and key, respectively. W c Q and W c K are the weights of the linear transformation. i is the index of H heads. 
O cat i ∈ R T obs ×T obs is the attention score of the i th attention head.\n√ d c = D c q is a scaled factor [55].\nSince the multi-head attention scores come from different representation subspaces, we fuse it through a convolution network [56] and then adopt a sigmoid function to map the attention scores to mask values [0, 1], as follows:\nJ t = δ (Conv (R t , K)), J c = δ (Conv (R c , K)),(5)\nwhere J t ∈ R T obs ×T obs and J c ∈ R N ×N are the feature maps of R t and R c , respectively. δ(•) is a sigmoid function. K denotes the 1 × 1 convolution kernel.\nTo model the different pedestrian characters' effects and discard invalid or negative ones in pedestrian character graphs, we generate sparse masks M n and M t of pedestrian character graphs G tem n and G cat t by an element-wise threshold ξ. When the element in J t or J c is larger than ξ, we do not change its value, otherwise we set it to zero, as follows:\nM n = I(J t ≥ ξ), M t = I(J c ≥ ξ),(6)\nwhere I(a ≥ b) is an indicator function, which denotes that elements in a keep their values if the inequality holds, otherwise change their values to zero. Sparse Temporal & Category Character Graphs. Since we have obtained the sparse masks M n and M t which represent different pedestrian characters' effects, we can generate the sparse character features as follows:\nFn = Softmax(F n ⊙ M n ), Ft = Softmax(F t ⊙ M t ),(7)\nwhere Fn and Ft are the sparse temporal and sparse category character feature matrices, respectively. ⊙ denotes the elementwise multiplication. Hence, we can obtained the sparse temporal character graph Ĝtem n = (V n , E n , Fn ) and the sparse category character graph Ĝcat t = (V t , E t , Ft ). Then, we adopt a graph convolutional network (GCN) [57], [58] to obtain the high-level features of the sparse graphs. We first add identity matrices to the adjacency matrices A n and A t following previous methods [50], [59], as follows:\nA ′ n = A n + I, A ′ t = A t + I.(8)\nSecondly, we stack A ′ n from all pedestrian character categories as Ân = {A ′ 1 , A ′ 2 , ..., A ′ N } ∈ R N ×T obs ×T obs and stack A ′ t from all time steps as Ât = {A ′ 1 , A ′ 2 , ..., A ′ T obs } ∈ R T obs ×N ×N . Then, we stack feature matrices of the l th layer as\nF (l) n = {F (l) 1 , F (l) 2 , ..., F (l) N } ∈ R N ×D n f ×T obs and F (l) t = {F (l) 1 , F (l) 2 , ..., F(l)\nT obs } ∈ R T obs ×D t f ×N , respectively. We also stack node degree matrices D n = {D 1 , D 2 , ..., D N } and D t = {D 1 , D 2 , ..., D T obs }, respectively. Finally, we have output features\nF (l+1) n ∈ R N ×D n f ×T obs and F (l+1) t ∈ R T obs ×D t\nf ×N of the (l + 1) th layer of the GCN, as follows:\nF (l+1) n = σ(D -1 2 n Ân D 1 2 n F (l) n W (l) n ), F (l+1) t = σ(D -1 2 t Ât D 1 2 t F (l) t W (l) t ),(9)\nwhere σ(•) is a non-linearity activation function." }, { "figure_ref": [], "heading": "C. Trajectory Representation Stream", "publication_ref": [ "b59" ], "table_ref": [], "text": "The trajectory representation stream is constructed to obtain observed trajectory coordinates representations, which can be improved by the learned sparse character representations. To generate multimodal trajectories, we utilize the CVAE sampler to select multiple goals, which guide the multimodal pedestrian trajectory prediction.\nTrajectory Encoders. Human trajectories coordinates are time series information. To obtain the high dimensional representation, we use gated-recurrent unit (GRU) [60] encoders to extract trajectory features. 
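For concreteness, a minimal PyTorch-style sketch of such a GRU trajectory encoder is given below; the 256-dimensional hidden size follows the setup in Section IV-B, while the linear embedding layer and the use of the final hidden state are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn


class TrajectoryEncoder(nn.Module):
    """GRU encoder mapping a bounding-box sequence to a single feature vector.

    The 256-d hidden size follows the experimental setup in Section IV-B; the
    linear embedding and the use of the final hidden state are illustrative
    assumptions, not the released implementation.
    """

    def __init__(self, box_dim: int = 4, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Linear(box_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, boxes: torch.Tensor) -> torch.Tensor:
        # boxes: (batch, T, 4) bounding-box coordinates of a trajectory
        h = torch.relu(self.embed(boxes))
        _, h_last = self.gru(h)      # h_last: (1, batch, hidden_dim)
        return h_last.squeeze(0)     # F_p for the observed X (or F_g for Y in training)


# e.g. F_p = TrajectoryEncoder()(X), and F_g = TrajectoryEncoder()(Y) during training
```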
The past trajectory X is processed by a GRU encoder to obtain the past trajectory feature F p . Meanwhile, the ground truth Y is processed by a GRU encoder to obtain the future trajectory feature F g , which only exists in the training process, as follows:\nF p = g enc (X), F g = g enc (Y ),(10)\nwhere g enc (•) denotes the GRU encoder. CVAE Sampler. To generate multimodal trajectories, we use CVAE to generate multimodal pedestrian trajectories by sampling variables in the latent space. We first introduce a latent variable Z ∼ N (µ Z , σ Z ). The probability density function p(Z|X) predicts Gaussian parameters µ Zp and σ Zp by F p . The probability density function q(Z|X, Y ) predicts Gaussian parameters µ Zq and σ Zq by both F p and F g , as follows:\nq(Z|X, Y ) = N (µ Zq , σ Zq ), p(Z|X) = N (µ Zp , σ Zp ),(11)\nIn the training process, we can sample multiple Z from q(Z|X, Y ) to generate multiple goals, and then generate multimodal trajectories. However, we do not have future trajectories Y during inference. Hence, we optimize p(Z|X) to approach q(Z|X, Y ) by the Kullback-Leibler divergence (KLD) loss, thus we can sample multiple Z from q(Z|X) to generate multimodal trajectories during inference. The KLD loss is shown as follows:\nL KLD = p(z) × [log( p(z) q(z) )],(12)\nTherefore, in the training process, we can use F p and F g to train the CVAE to generate latent variables Z, which generate multimodal goals jointly with F p . In the inference process, we can only use F p to generate latent variables Z, which are concatenated with F p to generate multimodal goals." }, { "figure_ref": [], "heading": "D. Decoder and Loss Function", "publication_ref": [ "b12", "b13", "b60", "b61", "b64" ], "table_ref": [], "text": "To obtain the multi-modal trajectory predictions, we follow the previous best-of-K [13], [14], [61], [62] strategy by sampling K latent variables Z by CVAE and generating corresponding goals G. Then, we use a GRU decoder to predict the multi-modal trajectories Ŷ by the learned sparse pedestrian character features and the multi-modal goals, as follows:\nŶ = g dec (ϵ(F (l+1) n ⊕ F (l+1) t ) ⊕ F p ⊕ ϵ(G)),(13)\nwhere ϵ denotes an MLP layer. ⊕ is the concatenation function and g dec (•) denotes the GRU decoder.\nOur model is trained end-to-end by minimizing the loss function L T SN et as:\nL T RJ = min k∈K || Ŷ (k) -Y || 2 , L GL = min k∈K || Ĝ(k) -G gt || 2 , L T SN et = L T RJ + L GL + L KLD ,(14)\nwhere L T RJ means the trajectory L 2 -norm loss for the complete training process. L GL means the L 2 -norm loss between generated goals and the ground truth. L KLD means the Kullback-Leibler divergence (KLD) loss for CVAE. G gt is the ground truth of goals.\nFinal Trajectory Clustering. The multimodal trajectory prediction aims to predict K possible trajectories to cover the ground truth. Our proposed network can generate multimodal trajectories with a CVAE module. However, when only limited samples are generated from the latent distribution, bias issues may arise because some samples may fall into low-density regions or too many samples may be gathered in high-density regions [65]. Hence, we adopt the final trajectory clustering strategy to make the samples evenly distributed in each region of the latent distribution. Specifically, we first sample C (C >> K) latent variables Z by the CVAE module and then generate C corresponding goals G. Subsequently, we predict C trajectories Ŷ conditioned by C generated goals. 
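The clustering step described next is not tied to a particular algorithm in the text; the sketch below assumes K-means over flattened trajectories and keeps, for each cluster, the sampled trajectory closest to its centre (the function and variable names are ours).

```python
import numpy as np
from sklearn.cluster import KMeans


def cluster_trajectories(trajs: np.ndarray, k: int = 20) -> np.ndarray:
    """Reduce C sampled trajectories to K multimodal predictions for one pedestrian.

    trajs: (C, T_pred, 4) predicted bounding-box sequences, with C >= k.
    Returns (K, T_pred, 4). K-means is an assumed choice; the paper only states
    that the C samples are clustered into K final predictions.
    """
    c, t, d = trajs.shape
    flat = trajs.reshape(c, t * d)
    km = KMeans(n_clusters=k, n_init=10).fit(flat)
    # keep the real sample nearest to each cluster centre rather than the centre itself
    picked = []
    for centre in km.cluster_centers_:
        idx = np.argmin(np.linalg.norm(flat - centre, axis=1))
        picked.append(trajs[idx])
    return np.stack(picked)


# final_preds = cluster_trajectories(sampled_trajs, k=20)  # e.g. C = 100 -> K = 20
```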
Finally, we cluster the number of predicted trajectories from C into K as our final multimodal predictions. Experiments show that the final trajectory clustering can improve performance, but it could cause an increment in the inference time. We believe that an appropriate value of C can be chosen to achieve a balance between the performance and the inference time." }, { "figure_ref": [], "heading": "IV. EXPERIMENTAL ANALYSIS", "publication_ref": [ "b62", "b65" ], "table_ref": [], "text": "We perform extensive experiments and compare experimental results with previous works on PIE [63] and JAAD [66]. Moreover, we conduct comprehensive ablation studies to verify our main contributions." }, { "figure_ref": [], "heading": "A. Datasets", "publication_ref": [ "b62", "b12", "b62", "b65", "b12", "b62" ], "table_ref": [], "text": "PIE Dataset. The Pedestrian Intention Estimation (PIE) dataset [63] is a large-scale first-person view dataset that is captured from dash cameras annotated at 30Hz. The PIE dataset contains 293,437 annotated frames and 1,842 pedestrians with annotated pedestrian character classes such as action, gesture, cross, look, age, gender, etc. For example, the pedestrian character class \"action\" has annotations for walking and standing, and the pedestrian character class \"age\" has annotations for child, young, adult and senior. We use five pedestrian character classes, i.e., action, gesture, look, gender and age, in our work. We use the same training and testing splits as [13], [63]. ST JAAD Dataset. The Joint Attention for Autonomous Driving (JAAD) dataset [66] is a first-person view dataset that is captured from dash cameras annotated at 30Hz. The JAAD dataset contains 82,032 annotated frames and 2,786 pedestrians, where 686 of them have pedestrian character annotations as the PIE dataset. For pedestrians with no pedestrian character annotations, we manually annotate them as \"unknown\". We use five same pedestrian character classes as the PIE dataset in our work and the same training and testing splits as [13], [63]." }, { "figure_ref": [], "heading": "B. Experimental Setup", "publication_ref": [ "b66", "b12", "b13", "b62", "b65", "b12", "b13", "b62", "b65", "b12", "b13" ], "table_ref": [], "text": "Implementation Details. The number of layers and the embedding dimension of self-attention in the sparse character representation stream are 1 and 64 respectively. The number of layers in GCN is 1. The threshold ξ is set to 0.5. The embedding dimension of encoders in the trajectory representation stream and the decoder in the decoder module are all set to 256. The hyper-parameter C of final trajectory clustering is set to 100. Our model is trained with batch size 128, learning rate 0.001, and an exponential LR scheduler [67]. Following previous methods [13], [14], [63], [66], we observed 0.5 seconds and predict 0.5, 1.0 and 1.5 seconds respectively. We use the best-of-20 strategy for the multimodal prediction as previous methods [13], [14], [63], [66]. The entire framework is trained on GTX-3090 GPU. All models are implemented with PyTorch.\nEvaluation Metrics. Following commonly accepted metrics [13], [14], we evaluate our method using: ( " }, { "figure_ref": [], "heading": "C. 
Quantitative Analysis", "publication_ref": [ "b62", "b62", "b63", "b15", "b62", "b13", "b12", "b62", "b65", "b12", "b12", "b12", "b12" ], "table_ref": [ "tab_2", "tab_2" ], "text": "As shown in Table I, we compare our method with seven first-person view trajectory prediction models, including Linear [63], LSTM [63], B-LSTM [64], FOL-X [16], PIE traj [63], BiTraP [14], ABC+ [13], on PIE [63] and JAAD [66] datasets, where ABC+ [13] achieves the best performance among all state-of-the-art methods. The comparison with state-of-the-art methods indicates that our method significantly outperforms all other approaches on PIE and JAAD datasets. Compared with ABC+ [13], i.e., the best method using pedestrian characters without sparsity, our method surpasses it by 21% (C-ADE) and 30% (C-FDE) on PIE and 16% (C-ADE) and 20% (C-FDE) on JAAD, which indicates the superiority of our method.\nMoreover, the results indicate that our method has better performance on long-term prediction, which is proved by ADE results at different time steps in Table I. Compared with ABC+ [13] on the PIE dataset, our method improves the performance by 6%, 10% and 17% on 0.5s, 1.0s and 1.5s, respectively. Compared with ABC+ [13] on the JAAD dataset, our method improves the performance by 5% and 12% on 1.0s and 1.5s, respectively. The underlying reason could be that our method removes temporal side-effect information due to long time distances from the future." }, { "figure_ref": [], "heading": "D. Ablation Study", "publication_ref": [ "b12", "b13", "b13" ], "table_ref": [ "tab_3", "tab_5", "tab_7" ], "text": "In this section, we conduct five ablation experiments. Firstly, we remove each component of our proposed network to demonstrate their contributions. Secondly, we conduct different best-of-K prediction to demonstrate the adaptability of our proposed method. Thirdly, we replace multi-category pedestrian characters with diverse single-category characters to show varying representation abilities of different categories' pedestrian character information. Fourthly, we change the mask threshold value ξ to evaluate the performance of different degrees of sparsity. Finally, we analyze the effectiveness of the final trajectory clustering.\nContribution of Each Component. As illustrated in Table II and Table III, we evaluate the contributions of three components in our network, i.e., (1) ST denotes the sparse temporal character graph; (2) SC denotes the sparse category character graph; (3) FTC denotes the final trajectory clustering strategy. The experiment results show that each component can lead to performance improvement. Specifically, compared with the baseline, i.e., the method without all three components, adding the sparse temporal character graph can improve performance by 9% (C-ADE) and 14% (C-FDE) on the PIE dataset and 3% (C-ADE) and 3% (C-FDE) on the JAAD dataset; adding the sparse category character graph can improve performance by 6% (C-ADE) and 3% (C-FDE) on the PIE dataset and 4% (C-ADE) and 4% (C-FDE) on the JAAD dataset; adding all two sparse character graphs can improve Different Best-of-K Prediction Previous approaches commonly use Best-of-K (K = 20) as the quantified metric of multimodal trajectory prediction. To further validate the adaptability and effectiveness of our proposed TSNet, we conduct an experiment on various best-of-K predictions with K = 5, 10, 15 on PIE and JAAD datasets as shown in Table IV. 
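For reference, the best-of-K displacement errors reported in Tables I and IV can be computed as in the short sketch below, which follows the metric definitions in Section IV-B; the exact reduction order over samples and pedestrians is an assumption rather than the released evaluation code.

```python
import numpy as np


def best_of_k_errors(preds: np.ndarray, gt: np.ndarray) -> dict:
    """Best-of-K displacement errors for one pedestrian.

    preds: (K, T_pred, 4) sampled bounding-box predictions (x1, y1, x2, y2).
    gt:    (T_pred, 4) ground-truth boxes.
    MSE-style definitions as in Section IV-B; taking the minimum over the K
    samples is the standard best-of-K protocol. Details are assumptions.
    """
    sq = (preds - gt[None]) ** 2                      # (K, T, 4) squared errors
    ade = sq.mean(axis=(1, 2)).min()                  # best-of-K bounding-box ADE
    fde = sq[:, -1].mean(axis=1).min()                # best-of-K bounding-box FDE

    centre = lambda b: np.stack([(b[..., 0] + b[..., 2]) / 2,
                                 (b[..., 1] + b[..., 3]) / 2], axis=-1)
    csq = (centre(preds) - centre(gt)[None]) ** 2     # (K, T, 2) centre errors
    return {"ADE": ade, "FDE": fde,
            "C-ADE": csq.mean(axis=(1, 2)).min(),
            "C-FDE": csq[:, -1].mean(axis=1).min()}
```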
Due to ABC [13] have not released the code, we compare our method with the second-best performance approach Bitrap [14]. Experiments results show that we outperform Bitrap [14] for different K, which indicates the adaptability and effectiveness of our proposed method. The baseline denotes the model without using pedestrian character information. Specifically, on the PIE dataset, the best performance is achieved on C-ADE and C-FDE when using \"age\" and \"gender\", respectively. However, experiment results achieve the worst performance and even decrease compared with the baseline when using \"look\". On the JAAD dataset, the best performance is achieved on C-ADE and C-FDE when using \"action\", and it causes a performance reduction compared with the baseline when using \"gender\" and \"gesture\". The result indicates that different categories of pedestrian characters have different representation abilities due to their different relevance to future modes. Some kinds of pedestrian characters improve prediction accuracy to varying degrees, while some may cause adverse influence. Hence, it is essential to remove the adverse pedestrian characters." }, { "figure_ref": [], "heading": "Analysis of Mask Threshold. As illustrated in Table VII", "publication_ref": [], "table_ref": [ "tab_14", "tab_15" ], "text": "and Table VIII, we evaluate the mask threshold ξ with four different values, including 0, 0.25, 0.50 and 0.75, on PIE and JAAD datasets. The mask threshold ξ means that we will eliminate the pedestrian characters when their scores are lower than the mask threshold value. ξ = 0 means that we do not eliminate any pedestrian character information. When ξ = 0.5, experiment results achieve the best performance on both PIE and JAAD datasets. However, ξ = 0 achieves the worst result on the PIE dataset and ξ = 0.25 achieves the worst result on the JAAD datasets. We believe that the choice of the mask threshold value depends on the difference of various datasets, including different pedestrian character categories, annotation errors, and the ego-motion. IX and Table X, we analyze to choose different numbers of samples C, which are clustered into K (K = 20) multimodal prediction in PIE and JAAD datasets. Experiment results show that the clustering post-process benefits the performance. The performance increases when the number of samples C increasing from 20 to 100. Moreover, the performance improvement rate decreases as the value of C increases. When C > 100, the performance is almost unchanged. We choose C = 100 in this work." }, { "figure_ref": [], "heading": "Analysis of Final Trajectory Clustering. As illustrated in Table", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2", "fig_3", "fig_4", "fig_3" ], "heading": "E. Qualitative Analysis", "publication_ref": [ "b12", "b13" ], "table_ref": [], "text": "Sparse Character Visualization. We show the sparse temporal characters and sparse category characters visualization on PIE and JAAD datasets in Figure 3. The two visualized trajectories are randomly selected from PIE and JAAD datasets, respectively. Characters are removed for their mask values lower than mask threshold ξ. The sparse temporal characters shows that some pedestrian characters at time steps far from prediction are invalid. The sparse temporal characters shows that some pedestrian characters in \"Look\" categories are invalid in the PIE dataset and some pedestrian characters in \"Gender\" and \"Age\" categories are invalid in the JAAD dataset. 
The sparse character visualization demonstrate that our method can remove negative pedestrian characters to improve the trajectory prediction.\nTrajectory Prediction Visualization. We show the visualization results on PIE and JAAD datasets in Figure 4 and Figure 5, respectively. The visualization results are set to predict twenty possible trajectories following previous approaches [13], [14]. We observe that the multi-modal trajectories predicted by our method can sufficiently cover the ground truth in various urban and rural traffic scenarios. Moreover, comparing the top two figures of the first column in Figure 4, both figures have similar observation histories but different future trajectories, i.e., the ground truth of the pedestrian trajectory in the top figure is a sharp turn, while the ground truth of the pedestrian trajectory in the bottom figure is going straight. Our method can make correct predictions in this case, which indicates that our learned sparse pedestrian character features can help to distinguish different trajectory modes, which can not be distinguished only using trajectory coordinates.\nV. CONCLUSION In this paper, we introduce a two-stream sparse-characterbased network that leverages negative-removed pedestrian characters to enhance the representation of trajectory coordinates. Additionally, we propose a novel sparse character graph, which consists of sparse temporal and sparse category graphs, to model the different effects of various pedestrian characters and eliminate invalid information by adaptive masks in the temporal and category dimensions, respectively. Through extensive experimental evaluations, we demonstrate that our method achieves significant performance improvements compared to previous state-of-the-art approaches. Furthermore, our ablation results reveal that different pedestrian characters exhibit varying representation abilities based on their relevance to future trajectory modes, highlighting the importance of sparsity in eliminating negative pedestrian characters. Moreover, our qualitative results illustrate the successful prediction of future trajectories in diverse urban and rural traffic scenarios. The observed improvements in our method can be attributed to the enhanced representation of pedestrian characters achieved through the removal of negative characters." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Manuscript received xxx; revised xxx; accepted xxx. Date of publication xxx; date of current version xxx. This work was supported in part by National Key R&D Program of China under Grant 2021YFB1714700, NSFC under Grants 62088102 and 62106192, Natural Science Foundation of Shaanxi Province under Grants 2022JC-41 and 2021JQ-054, China Postdoctoral Science Foundation under Grant 2020M683490, and Fundamental Research Funds for the Central Universities under Grants XTR042021005 and XTR072022001." } ]
Pedestrian trajectory prediction in a first-person view has recently attracted much attention due to its importance in autonomous driving. Recent work utilizes pedestrian character information, i.e., action and appearance, to improve the learned trajectory embedding and achieves state-of-the-art performance. However, it neglects the invalid and negative pedestrian character information, which is harmful to trajectory representation and thus leads to performance degradation. To address this issue, we present a two-stream sparse-character-based network (TSNet) for pedestrian trajectory prediction. Specifically, TSNet learns the negative-removed characters in the sparse character representation stream to improve the trajectory embedding obtained in the trajectory representation stream. Moreover, to model the negative-removed characters, we propose a novel sparse character graph, including the sparse category and sparse temporal character graphs, to learn the different effects of various characters in the category and temporal dimensions, respectively. Extensive experiments on two first-person view datasets, PIE and JAAD, show that our method outperforms existing state-of-the-art methods. In addition, ablation studies demonstrate the different effects of various characters and show that TSNet outperforms approaches that do not eliminate negative characters.
Sparse Pedestrian Character Learning for Trajectory Prediction
[ { "figure_caption": "Fig. 1 .1Fig. 1. Illustration of negative characters in category and temporal dimensions.In the category dimension, some pedestrian characters are irrelevant to trajectory modes (i.e., gender, age). In the temporal dimension, a determined type of character (such as the hand gesture) at time steps far from future (i.e., T = 1, 2) are irrelevant to trajectory modes. We name the character information irrelevant to trajectory modes as negative characters, which may negatively affect trajectory representation and lead to performance degradation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. An overview of our proposed method. Our method consists of three key components: (a) A sparse character representation stream, which learns the sparse character features from the pedestrian character inputs. (b) A trajectory representation stream, which encodes the trajectory coordinates features. (c) A decoder module, which decodes the concatenation feature from (a) and (b).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Examples of visualization from the PIE and JAAD datasets showing the sparse character of pedestrian. White blocks denotes removed characters. ξ denotes the mask threshold.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Examples of visualization from the PIE dataset showing the multimodality in the trajectory prediction space. Blue bounding boxes are observed trajectories. Green bounding boxes are the multimodal predictions. Red bounding boxes refer to the ground truth of future trajectories.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Examples of visualization from the JAAD dataset showing the multimodality in the trajectory prediction space. Blue bounding boxes are observed trajectories. Green bounding boxes are the multimodal predictions. Red bounding boxes refer to the ground truth of future trajectories.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "QUANTITATIVE RESULTS ON TWO PUBLIC BENCHMARK DATASETS PIE AND JAAD. ALL APPROACHES INPUT 15 FRAMES AND OUTPUT 45 FRAMES. OUR TSNET SIGNIFICANTLY OUTPERFORMS THE OTHER STATE-OF-THE-ART METHODS. THE LOWER THE BETTER.", "figure_data": "PIEJAADMethodADEC-ADEC-FDEADEC-ADEC-FDE0.5s1.0s1.5s1.5s0.5s1.0s1.5s1.5sLinear [63]12347713659503983233857230315656111LSTM [63]1723309118373352289569155814735766B-LSTM [64]1012968558113259159539153514475615FOL-X [16]471835845462303147484137412904924PIE traj [63]582006365962477110399128011834780BiTraP [14]2348102812613894222177565ABC+ [13]163887651914089189145409TSNet153473511334184166121325ST SC FTCADE 0.5s 1.0s 1.5sC-ADE C-FDE 1.5s✕✕✕184511185285✓✕✕17429877245✕✓✕174310380276✓✓✕16409672234✓✓✓15347351133", "figure_id": "tab_2", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "OF EACH COMPONENT ON THE PIE DATASET. THE LOWER THE BETTER.", "figure_data": "", "figure_id": "tab_3", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "OF EACH COMPONENT ON THE JAAD DATASET. 
THE", "figure_data": "", "figure_id": "tab_5", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "1) Bounding Box Average Displacement Error (ADE), which denotes the mean square error (MSE) distance of bounding box for the prediction", "figure_data": "PIEJAADKMethodADEC-ADEC-FDEADEC-ADEC-FDE0.5s1.0s1.5s1.5s0.5s1.0s1.5s1.5s15Bitrap [14] TSNet17 1646 37119 8294 59343 17645 42113 90272 183221 137736 38710Bitrap [14] TSNet19 1758 45164 110137 86541 29549 44140 100367 219313 1711128 5295Bitrap [14] TSNet27 22111 83378 259345 2321506 95864 51238 144738 379675 3292710 1221", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "BEST-OF-K PREDICTION ON THE PIE AND JAAD DATASET. ALL APPROACHES INPUT 15 FRAMES AND OUTPUT 45 FRAMES. THE LOWER THE", "figure_data": "BETTER.Dataset Character C-ADE (1.5s) C-FDE (1.5s)Baseline85285Action79258PIEGesture80267Look86310Gender76245Age75255", "figure_id": "tab_7", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "OF DIFFERENT PEDESTRIAN CHARACTER CATEGORIES ON THE PIE DATASET. BOLD INDICATES THE BEST PERFORMANCE. UNDERLINE INDICATES THE WORST PERFORMANCE. THE LOWER THE BETTER.", "figure_data": "Dataset Character C-ADE (1.5s) C-FDE (1.5s)Baseline177542Action161502JAADGesture185577Look174539Gender202638Age168509", "figure_id": "tab_8", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "OF DIFFERENT PEDESTRIAN CHARACTER CATEGORIES ON THE JAAD DATASET. BOLD INDICATES THE BEST PERFORMANCE. UNDERLINE INDICATES THE WORST PERFORMANCE. THE LOWER THE BETTER.", "figure_data": "and the ground truth; (2) Bounding Box Center ADE (C-ADE),which denotes the MSE distance of bounding box center forthe prediction and the ground truth; (3) Bounding Box FinalDisplacement Error (FDE), which denotes the MSE distancebetween the destination bounding box for the prediction and theground truth; (4) Bounding Box Center FDE (C-FDE), whichdenotes the MSE distance between the destination boundingbox center for the prediction and the ground truth. The errorof the bounding box is calculated using the upper-left andlower-right coordinates.", "figure_id": "tab_9", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "STUDY OF THRESHOLD ξ ON THE PIE DATASET. BOLD INDICATES THE BEST PERFORMANCE. THE LOWER THE BETTER.", "figure_data": "DatasetξADE 0.5s 1.0s 1.5sC-ADE C-FDE 1.5s04187 176131359JAAD0.25 41 0.50 4189 188 84 166144 121441 3250.75 4292 189143396", "figure_id": "tab_11", "figure_label": "VII", "figure_type": "table" }, { "figure_caption": "STUDY OF THRESHOLD ξ ON THE JAAD DATASET. BOLD INDICATES THE BEST PERFORMANCE. THE LOWER THE BETTER.", "figure_data": "performance by 15% (C-ADE) and 18% (C-FDE) on the PIEdataset and 12% (C-ADE) and 13% (C-FDE) on the JAADdataset. The results indicate the importance of both temporalcharacter sparsity and category character sparsity for pedestriantrajectory prediction.", "figure_id": "tab_12", "figure_label": "VIII", "figure_type": "table" }, { "figure_caption": "STUDY OF FINAL TRAJECTORY CLUSTERING ON THE PIE DATASET. THE LOWER THE BETTER.", "figure_data": "Dataset Samples CADE 0.5s 1.0s 1.5s 1.5s C-ADE C-FDE2043 96 2051574694042 88 179134375JAAD6041 85 1691243388041 86 16912432910041 84 166121325", "figure_id": "tab_14", "figure_label": "IX", "figure_type": "table" }, { "figure_caption": "STUDY OF FINAL TRAJECTORY CLUSTERING ON THE JAAD DATASET. THE LOWER THE BETTER. Impacts of Different Pedestrian Character Information. 
As illustrated in Table V and Table VI, we conduct experiments on using different single-category character information, including action, gesture, look, gender and age, to demonstrate that different categories of pedestrian characters have different relevance to future modes on PIE and JAAD datasets.", "figure_data": "", "figure_id": "tab_15", "figure_label": "X", "figure_type": "table" } ]
Yonghao Dong; Le Wang; Sanpin Zhou
[ { "authors": "Q Sun; X Huang; J Gu; B C Williams; H Zhao", "journal": "", "ref_id": "b0", "title": "M2i: From factored marginal trajectory prediction to interactive prediction", "year": "2022" }, { "authors": "X Ren; T Yang; L E Li; A Alahi; Q Chen", "journal": "", "ref_id": "b1", "title": "Safety-aware motion prediction with unseen vehicles for autonomous driving", "year": "2021" }, { "authors": "S Wen; H Wang; D Metaxas", "journal": "", "ref_id": "b2", "title": "Social ode: Multi-agent trajectory forecasting with neural ordinary differential equations", "year": "2022" }, { "authors": "J Gu; C Sun; H Zhao", "journal": "", "ref_id": "b3", "title": "Densetnt: End-to-end trajectory prediction from dense goal sets", "year": "2021" }, { "authors": "Y Zhang; W Wang; W Guo; P Lv; M Xu; W Chen; D Manocha", "journal": "", "ref_id": "b4", "title": "D2-tpred: Discontinuous dependency for trajectory prediction under traffic lights", "year": "2022" }, { "authors": "Y Wang; S Chen", "journal": "IEEE Transactions on Multimedia", "ref_id": "b5", "title": "Multi-agent trajectory prediction with spatiotemporal sequence fusion", "year": "2021" }, { "authors": "H Hu; Q Wang; Z Zhang; Z Li; Z Gao", "journal": "Pattern Recognition", "ref_id": "b6", "title": "Holistic transformer: A joint neural network for trajectory prediction and decision-making of autonomous vehicles", "year": "2023" }, { "authors": "L.-W Tsao; Y.-K Wang; H.-S Lin; H.-H Shuai; L.-K Wong; W.-H Cheng", "journal": "", "ref_id": "b7", "title": "Social-ssl: Self-supervised cross-sequence representation learning based on transformers for multi-agent trajectory prediction", "year": "2022" }, { "authors": "A Monti; A Porrello; S Calderara; P Coscia; L Ballan; R Cucchiara", "journal": "", "ref_id": "b8", "title": "How many observations are enough? 
knowledge distillation for trajectory forecasting", "year": "2022" }, { "authors": "C Wong; B Xia; Z Hong; Q Peng; W Yuan; Q Cao; Y Yang; X You", "journal": "", "ref_id": "b9", "title": "View vertically: A hierarchical network for trajectory prediction via fourier spectrums", "year": "2022" }, { "authors": "P Dendorfer; S Elflein; L Leal-Taixé", "journal": "", "ref_id": "b10", "title": "Mg-gan: A multi-generator model preventing out-of-distribution samples in pedestrian trajectory prediction", "year": "2021" }, { "authors": "G Chen; J Li; N Zhou; L Ren; J Lu", "journal": "", "ref_id": "b11", "title": "Personalized trajectory prediction via distribution discrimination", "year": "2021" }, { "authors": "M Halawa; O Hellwich; P Bideau", "journal": "", "ref_id": "b12", "title": "Action-based contrastive learning for trajectory prediction", "year": "2022" }, { "authors": "Y Yao; E Atkins; M Johnson-Roberson; R Vasudevan; X Du", "journal": "IEEE RAL", "ref_id": "b13", "title": "Bitrap: Bi-directional pedestrian trajectory prediction with multi-modal goal estimation", "year": "2021" }, { "authors": "L Neumann; A Vedaldi", "journal": "", "ref_id": "b14", "title": "Pedestrian and ego-vehicle trajectory prediction from monocular camera", "year": "2021" }, { "authors": "Y Yao; M Xu; C Choi; D J Crandall; E M Atkins; B Dariush", "journal": "", "ref_id": "b15", "title": "Egocentric vision-based future vehicle localization for intelligent driving assistance systems", "year": "2019" }, { "authors": "A Mohamed; D Zhu; W Vu; M Elhoseiny; C Claudel", "journal": "", "ref_id": "b16", "title": "Socialimplicit: Rethinking trajectory prediction evaluation and the effectiveness of implicit maximum likelihood estimation", "year": "2022" }, { "authors": "N Shafiee; T Padir; E Elhamifar", "journal": "", "ref_id": "b17", "title": "Introvert: Human trajectory prediction via conditional 3d attention", "year": "2021" }, { "authors": "S Zamboni; Z T Kefato; S Girdzijauskas; C Norén; L Dal Col", "journal": "Pattern Recognition", "ref_id": "b18", "title": "Pedestrian trajectory prediction with convolutional neural networks", "year": "2022" }, { "authors": "B Xia; C Wong; Q Peng; W Yuan; X You", "journal": "Pattern Recognition", "ref_id": "b19", "title": "Cscnet: Contextual semantic consistency network for trajectory prediction in crowded spaces", "year": "2022" }, { "authors": "Z Pei; X Qi; Y Zhang; M Ma; Y.-H Yang", "journal": "Pattern Recognition", "ref_id": "b20", "title": "Human trajectory prediction in crowded scene using social-affinity long short-term memory", "year": "2019" }, { "authors": "R Wang; X Song; Z Hu; Y Cui", "journal": "IEEE Transactions on Instrumentation and Measurement", "ref_id": "b21", "title": "Spatio-temporal interaction aware and trajectory distribution aware graph convolution network for pedestrian multimodal trajectory prediction", "year": "2022" }, { "authors": "A Alahi; K Goel; V Ramanathan; A Robicquet; L Fei-Fei; S Savarese", "journal": "", "ref_id": "b22", "title": "Social lstm: Human trajectory prediction in crowded spaces", "year": "2016" }, { "authors": "A Gupta; J Johnson; L Fei-Fei; S Savarese; A Alahi", "journal": "", "ref_id": "b23", "title": "Social gan: Socially acceptable trajectories with generative adversarial networks", "year": "2018" }, { "authors": "L Shi; L Wang; C Long; S Zhou; F Zheng; N Zheng; G Hua", "journal": "", "ref_id": "b24", "title": "Social interpretable tree for pedestrian trajectory prediction", "year": "2022" }, { "authors": "V Kosaraju; A Sadeghian; R 
Martín-Martín; I Reid; H Rezatofighi; S Savarese", "journal": "", "ref_id": "b25", "title": "Social-bigat: Multimodal trajectory forecasting using bicycle-gan and graph attention networks", "year": "2019" }, { "authors": "Y Yuan; X Weng; Y Ou; K M Kitani", "journal": "", "ref_id": "b26", "title": "Agentformer: Agent-aware transformers for socio-temporal multi-agent forecasting", "year": "2021" }, { "authors": "I Bae; J.-H Park; H.-G Jeon", "journal": "", "ref_id": "b27", "title": "Learning pedestrian group representations for multi-modal trajectory prediction", "year": "2022" }, { "authors": "A Sadeghian; V Kosaraju; A Sadeghian; N Hirose; H Rezatofighi; S Savarese", "journal": "", "ref_id": "b28", "title": "Sophie: An attentive gan for predicting paths compliant to social and physical constraints", "year": "2019" }, { "authors": "J Wang; J Zhao; Q Yin; X Luo; Y Zheng; Y.-Q Shi; S K Jha", "journal": "IEEE Transactions on Multimedia", "ref_id": "b29", "title": "Smsnet: A new deep convolutional neural network model for adversarial example detection", "year": "2021" }, { "authors": "M Wang; W Zhou; Q Tian; H Li", "journal": "IEEE Transactions on Multimedia", "ref_id": "b30", "title": "Deep graph convolutional quantization networks for image retrieval", "year": "2022" }, { "authors": "K Mangalam; Y An; H Girase; J Malik", "journal": "", "ref_id": "b31", "title": "From goals, waypoints & paths to long term human trajectory forecasting", "year": "2021" }, { "authors": "O Ronneberger; P Fischer; T Brox", "journal": "", "ref_id": "b32", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "K Guo; W Liu; J Pan", "journal": "", "ref_id": "b33", "title": "End-to-end trajectory distribution prediction based on occupancy grid maps", "year": "2022" }, { "authors": "X Shi; Z Chen; H Wang; D.-Y Yeung; W.-K Wong; W.-C Woo", "journal": "", "ref_id": "b34", "title": "Convolutional lstm network: A machine learning approach for precipitation nowcasting", "year": "2015" }, { "authors": "A Rasouli; M Rohani; J Luo", "journal": "", "ref_id": "b35", "title": "Bifold and semantic reasoning for pedestrian behavior prediction", "year": "2021" }, { "authors": "J Yue; D Manocha; H Wang", "journal": "Springer", "ref_id": "b36", "title": "Human trajectory prediction via neural social physics", "year": "2022" }, { "authors": "I Bae; J.-H Park; H.-G Jeon", "journal": "", "ref_id": "b37", "title": "Non-probability sampling network for stochastic human trajectory prediction", "year": "2022" }, { "authors": "Y Cao; C Xiao; A Anandkumar; D Xu; M Pavone", "journal": "", "ref_id": "b38", "title": "Advdo: Realistic adversarial attacks for trajectory prediction", "year": "2022" }, { "authors": "B Pang; T Zhao; X Xie; Y N Wu", "journal": "", "ref_id": "b39", "title": "Trajectory prediction with latent belief energy-based model", "year": "2021" }, { "authors": "J F P Kooij; N Schneider; F Flohr; D M Gavrila", "journal": "", "ref_id": "b40", "title": "Context-based pedestrian path prediction", "year": "2014" }, { "authors": "T Yagi; K Mangalam; R Yonetani; Y Sato", "journal": "", "ref_id": "b41", "title": "Future person localization in first-person videos", "year": "2018" }, { "authors": "W Xia; Q Wang; Q Gao; X Zhang; X Gao", "journal": "IEEE Transactions on Multimedia", "ref_id": "b42", "title": "Self-supervised graph convolutional network for multi-view clustering", "year": "2021" }, { "authors": "S Zhao; L Fei; J Wen; J Wu; B Zhang", "journal": "IEEE Transactions on 
Multimedia", "ref_id": "b43", "title": "Intrinsic and complete structure learning based incomplete multiview clustering", "year": "2021" }, { "authors": "M Cao; C Ding; C Chen; H Dou; X Hu; J Yan", "journal": "IEEE Transactions on Multimedia", "ref_id": "b44", "title": "Progressive context-aware graph feature learning for target re-identification", "year": "2022" }, { "authors": "M Jian; C Jung; Y Zheng", "journal": "IEEE transactions on multimedia", "ref_id": "b45", "title": "Discriminative structure learning for semantic concept detection with graph embedding", "year": "2013" }, { "authors": "M Mesgaran; A B Hamza", "journal": "IEEE Transactions on Multimedia", "ref_id": "b46", "title": "Anisotropic graph convolutional network for semi-supervised learning", "year": "2020" }, { "authors": "B Jiang; B Wang; B Luo", "journal": "Pattern Recognition", "ref_id": "b47", "title": "Sparse norm regularized attribute selection for graph neural networks", "year": "2023" }, { "authors": "Y Takai; A Miyauchi; M Ikeda; Y Yoshida", "journal": "", "ref_id": "b48", "title": "Hypergraph clustering based on pagerank", "year": "2020" }, { "authors": "A Mohamed; K Qian; M Elhoseiny; C Claudel", "journal": "", "ref_id": "b49", "title": "Social-stgcnn: A social spatio-temporal graph convolutional neural network for human trajectory prediction", "year": "2020" }, { "authors": "J Sun; Q Jiang; C Lu", "journal": "", "ref_id": "b50", "title": "Recursive social behavior graph for trajectory prediction", "year": "2020" }, { "authors": "C Xu; M Li; Z Ni; Y Zhang; S Chen", "journal": "", "ref_id": "b51", "title": "Groupnet: Multiscale hypergraph neural networks for trajectory prediction with relational reasoning", "year": "2022" }, { "authors": "J Fu; J Liu; H Tian; Y Li; Y Bao; Z Fang; H Lu", "journal": "", "ref_id": "b52", "title": "Dual attention network for scene segmentation", "year": "2019" }, { "authors": "H Zhang; C Wu; Z Zhang; Y Zhu; H Lin; Z Zhang; Y Sun; T He; J Mueller; R Manmatha", "journal": "", "ref_id": "b53", "title": "Resnest: Split-attention networks", "year": "2022" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "NeurIPS", "ref_id": "b54", "title": "Attention is all you need", "year": "2017" }, { "authors": "C Choy; J Gwak; S Savarese", "journal": "", "ref_id": "b55", "title": "4d spatio-temporal convnets: Minkowski convolutional neural networks", "year": "2019" }, { "authors": "M Chen; Z Wei; Z Huang; B Ding; Y Li", "journal": "", "ref_id": "b56", "title": "Simple and deep graph convolutional networks", "year": "2020" }, { "authors": "F Wu; A Souza; T Zhang; C Fifty; T Yu; K Weinberger", "journal": "", "ref_id": "b57", "title": "Simplifying graph convolutional networks", "year": "2019" }, { "authors": "Y Xu; L Wang; Y Wang; Y Fu", "journal": "", "ref_id": "b58", "title": "Adaptive trajectory prediction via transferable gnn", "year": "2022" }, { "authors": "R Dey; F M Salem", "journal": "MWSCAS", "ref_id": "b59", "title": "Gate-variants of gated recurrent unit (gru) neural networks", "year": "2017" }, { "authors": "H Zhao; R P Wildes", "journal": "", "ref_id": "b60", "title": "Where are you heading? 
dynamic trajectory prediction with expert goal examples", "year": "2021" }, { "authors": "J Sun; Y Li; H.-S Fang; C Lu", "journal": "", "ref_id": "b61", "title": "Three steps to multimodal trajectory prediction: Modality clustering, classification and synthesis", "year": "2021" }, { "authors": "A Rasouli; I Kotseruba; T Kunic; J K Tsotsos", "journal": "", "ref_id": "b62", "title": "Pie: A large-scale dataset and models for pedestrian intention estimation and trajectory prediction", "year": "2019" }, { "authors": "A Bhattacharyya; M Fritz; B Schiele", "journal": "", "ref_id": "b63", "title": "Long-term on-board prediction of people in traffic scenes under uncertainty", "year": "2018" }, { "authors": "P Xu; J.-B Hayet; I Karamouzas", "journal": "Springer", "ref_id": "b64", "title": "Socialvae: Human trajectory prediction using timewise latents", "year": "2022" }, { "authors": "A Rasouli; I Kotseruba; J K Tsotsos", "journal": "ICCVW", "ref_id": "b65", "title": "Are they going to cross? a benchmark dataset and baseline for pedestrian crosswalk behavior", "year": "2017" }, { "authors": "T Salzmann; B Ivanovic; P Chakravarty; M Pavone", "journal": "", "ref_id": "b66", "title": "Trajectron++: Dynamically-feasible trajectory forecasting with heterogeneous data", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 151.85, 206.68, 147.67, 13.15 ], "formula_id": "formula_0", "formula_text": "s n = [s n 1 , s n 2 , ..., s n T obs ]. Note that s n t" }, { "formula_coordinates": [ 4, 48.96, 321.74, 96.66, 12.2 ], "formula_id": "formula_1", "formula_text": "G tem n ∈ {G tem 1 , G tem 2 , ." }, { "formula_coordinates": [ 4, 48.96, 345.65, 251.06, 24.28 ], "formula_id": "formula_2", "formula_text": "G tem n = (V n , E n , F n ), where V n = {v n i |i = {1, ...," }, { "formula_coordinates": [ 4, 48.96, 466.41, 251.06, 59.33 ], "formula_id": "formula_3", "formula_text": "G cat t = (V t , E t , F t ), where V t = {v i t |i = {1, ..., N }} is the vertex set of the graph G cat t . The pedestrian character information s i t is the attribute of v i t . E t = {e i,j t |i, j = {1, ..., N }} is the edge set of the graph G cat n . F t = {f i t |i = {1, ..., N }} ∈ R N ×D t" }, { "formula_coordinates": [ 4, 131.09, 628.35, 169.6, 28.9 ], "formula_id": "formula_4", "formula_text": "a n;i,j = ||v n i -v n j || 2 , a t;i,j = ||v i t -v j t || 2 ,(1)" }, { "formula_coordinates": [ 4, 139, 691.86, 161.69, 28 ], "formula_id": "formula_5", "formula_text": "f n i = ϕ(v n i , W n ), f i t = ϕ(v i t , W t ),(2)" }, { "formula_coordinates": [ 4, 357.73, 135.82, 205.97, 70.26 ], "formula_id": "formula_6", "formula_text": "Q tem i = ϕ(F n , W t Q ), K tem i = ϕ(F n , W t K ), O tem i = Softmax( Q tem i (K tem i ) T √ d t ), R t = Concat(O tem i ), i = 1, 2, ..., H,(3)" }, { "formula_coordinates": [ 4, 338.39, 216.03, 169.09, 14.39 ], "formula_id": "formula_7", "formula_text": "Q tem i ∈ R T obs ×D t q and K tem i ∈ R T obs ×D t k" }, { "formula_coordinates": [ 4, 509.91, 248.32, 52.38, 18.36 ], "formula_id": "formula_8", "formula_text": "√ d t = D t q" }, { "formula_coordinates": [ 4, 360.7, 298.32, 203.01, 70.26 ], "formula_id": "formula_9", "formula_text": "Q cat i = ϕ(F t , W c Q ), K cat i = ϕ(F t , W c K ), O cat i = Softmax( Q cat i (K cat i ) T √ d c ), R c = Concat(O cat i ), i = 1, 2, ..., H,(4)" }, { "formula_coordinates": [ 4, 339.17, 377.61, 147.8, 14.39 ], "formula_id": "formula_10", "formula_text": "Q cat i ∈ R N ×D c q and K cat i ∈ R N ×D c" }, { "formula_coordinates": [ 4, 387.85, 499.62, 175.85, 24.63 ], "formula_id": "formula_11", "formula_text": "J t = δ (Conv (R t , K)), J c = δ (Conv (R c , K)),(5)" }, { "formula_coordinates": [ 4, 400.94, 653.88, 162.76, 24.6 ], "formula_id": "formula_12", "formula_text": "M n = I(J t ≥ ξ), M t = I(J c ≥ ξ),(6)" }, { "formula_coordinates": [ 5, 122.49, 85.89, 178.21, 28.18 ], "formula_id": "formula_13", "formula_text": "Fn = Softmax(F n ⊙ M n ), Ft = Softmax(F t ⊙ M t ),(7)" }, { "formula_coordinates": [ 5, 144.9, 236.1, 155.79, 27.64 ], "formula_id": "formula_14", "formula_text": "A ′ n = A n + I, A ′ t = A t + I.(8)" }, { "formula_coordinates": [ 5, 48.96, 316.13, 251.06, 27.79 ], "formula_id": "formula_15", "formula_text": "F (l) n = {F (l) 1 , F (l) 2 , ..., F (l) N } ∈ R N ×D n f ×T obs and F (l) t = {F (l) 1 , F (l) 2 , ..., F(l)" }, { "formula_coordinates": [ 5, 48.96, 366.33, 251.06, 25.49 ], "formula_id": "formula_16", "formula_text": "F (l+1) n ∈ R N ×D n f ×T obs and F (l+1) t ∈ R T obs ×D t" }, { "formula_coordinates": [ 5, 103.63, 398.87, 197.06, 34 ], "formula_id": "formula_17", "formula_text": "F (l+1) n = σ(D -1 2 n Ân D 1 2 n F (l) n W (l) n ), F (l+1) t = σ(D -1 2 t Ât D 1 2 t F (l) t W (l) t ),(9)" }, { "formula_coordinates": [ 5, 143.41, 657.89, 157.28, 24.63 ], "formula_id": "formula_18", "formula_text": "F p = g enc (X), F 
g = g enc (Y ),(10)" }, { "formula_coordinates": [ 5, 380.97, 100, 182.74, 24.59 ], "formula_id": "formula_19", "formula_text": "q(Z|X, Y ) = N (µ Zq , σ Zq ), p(Z|X) = N (µ Zp , σ Zp ),(11)" }, { "formula_coordinates": [ 5, 369.77, 229.89, 193.93, 22.31 ], "formula_id": "formula_20", "formula_text": "L KLD = p(z) × [log( p(z) q(z) )],(12)" }, { "formula_coordinates": [ 5, 348.29, 427.22, 215.42, 13.75 ], "formula_id": "formula_21", "formula_text": "Ŷ = g dec (ϵ(F (l+1) n ⊕ F (l+1) t ) ⊕ F p ⊕ ϵ(G)),(13)" }, { "formula_coordinates": [ 5, 366.15, 500.4, 197.55, 52.01 ], "formula_id": "formula_22", "formula_text": "L T RJ = min k∈K || Ŷ (k) -Y || 2 , L GL = min k∈K || Ĝ(k) -G gt || 2 , L T SN et = L T RJ + L GL + L KLD ,(14)" } ]
2023-11-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b9", "b15", "b0", "b1", "b11", "b5", "b3", "b8", "b19", "b13", "b18" ], "table_ref": [], "text": "In the era of information explosion, mining data effectively has huge potential but is a difficult problem which takes time, money and labour effort. Multi-document summarization is a natural language processing task that is useful for solving this problem. Receiving the set of documents as input, the summarization system aims to select or generate important information to create a brief summary for these documents (Ježek and Steinberger, 2008). It is a complex problem that has gained attention from the research community. Several past challenges and shared tasks have focused on summarization. One of the earliest summarization shared tasks is the series of document understanding conference (DUC) challenges1 , the Text Analysis Con-ference (TAC) summarization shared tasks2 In recent years, some summarization shared tasks have been launched to support research and development in this field for English, such as DocEng 2019 (Lins et al., 2019) and BioNLP-MEDIQA 2021(Abacha et al., 2021), ect.\nBased on output characteristics, there are two major approaches for automatic summarization, i.e, extractive and abstractive summarization. Extractive summarization tends to select the most crucial sentences (sections) from the documents while abstractive summarization tries to rewrite a new summary based on the original important information (Allahyari et al., 2017). From the early 1950s, various methods have been proposed for extractive summarization ranging from frequency-based methods (Khan et al., 2019) to machine learningbased methods (Gambhir and Gupta, 2017). The extractive methods are fast and simple but the summaries are far from the manual-created summary, which can be remedied with the abstractive approach (El-Kassas et al., 2021). In the multi-document problem, extractive approaches show significant disadvantages in arranging and combining information from several documents. In recent years, sequence-to-sequence learning (seq2seq) makes abstractive summarization possible (Hou et al., 2017). A set of models based on encoder-decoder such as PEGASUS (Zhang et al., 2020), BART (Lewis et al., 2020), T5 (Raffel et al., 2020) achieves potential results for abstractive multi-document summarization. Studies on this problem for Vietnamese text are still in the early stages with a few initial achievements, especially in extractive approaches. In recent years, there has been a growing interest to develop automatic abstractive summarization systems. Despite these attempts, the lack of a comprehensive The remainder of the paper is organized as follows: Section 2 gives a detailed description of the Abmusu shared task and the task data. The next section describes the data construction, annotation methodologies and data collection. Section 3 describes the competition, baselines, approaches and respective results. Finally, Section 4 concludes the paper." }, { "figure_ref": [], "heading": "Task Description", "publication_ref": [], "table_ref": [], "text": "VLSP 2022 Abmusu shared task addressed an abstractive multi-document summarization task. The goal of Abmusu shared task is to develop summarization systems that could create abstractive summaries automatically for a set of documents on a topic. The model input is multiple news documents on the same topic, and the corresponding output is a related abstractive summary. 
In the scope of Abmusu shared task, we only focus on Vietnamese news. For multi-document summarization purposes, Abmusu task is aimed at summarizing multiple input documents that contain a piece of information related to the same topic, we call them 'document clusters'. Each cluster has 3 -5 documents that illustrate the same topic and the goal of this shared task is to build models to create an abstractive summary per cluster automatically.\n3 Task Data" }, { "figure_ref": [], "heading": "Data Preparation", "publication_ref": [ "b12" ], "table_ref": [], "text": "The data is automatically collected and filtered from Vietnamese electronic news on 8 categories, including the economy, society, culture, science and technology, etc. It is divided into training, validation and test datasets. The datasets contain several document clusters. Each cluster has 3 -5 documents that illustrate the same topic. On training and validation datasets, a manual-created reference abstractive summary is provided per cluster. The test set is formatted similarly to the training and validation sets, but without an abstractive summary.\nThe data preparation process is described in Figure 1. We used INCEpTION3 (Klie et al., 2018) as the annotation tool. It is a semantic annotation platform offering intelligent assistance and knowledge management. There are 10 human annotators and 2 experts who participated in the annotation process, the annotation guideline with full definition and illustrative examples was provided. We used an 8-step process to make summarization data, each data sample needs the involvement of at least 1 annotator and 1 reviewer:\n• Crawl data from news websites by categories.\n• Group documents into clusters by the highlighted hashtag, category, posted time, and similarity.\n• Remove duplicate or highly similar documents.\n• Remove clusters with too few articles, and review to select clusters/documents manually.\n• Choose 200 more clusters randomly to ensure the distribution for difficult test-cases.\n• Create the summary manually by the annotators.\n• Re-check the quality of the summary (by the reviewers) to ensure the quality and length. Unqualified data is relabeled by another annotator.\n• Refine all data by expert reviewers.\nAs a result, we prepared a total of 1, 839 documents in 600 clusters: 621 documents (200 clusters) for the training set, 304 documents (100 clusters) for the validation set and 914 documents (300 clusters) in the test set. Figure 2 " }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b14" ], "table_ref": [], "text": "The official evaluation measures are the ROUGE-2 scores and ROUGE-2 F1 (R2-F1) is the main score for ranking. 
ROUGE-2 Recall (R2-R), Precision (R2-P) and R2-F1 between the predicted summary and the reference summary are calculated as (Lin, 2004):\nR2-P = |Matched n-grams| / |Predicted summary n-grams| (1)\nR2-R = |Matched n-grams| / |Reference summary n-grams| (2)\nR2-F1 = (2 × R2-P × R2-R) / (R2-P + R2-R) (3)" }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b4", "b6", "b17" ], "table_ref": [], "text": "The committee provided 4 baselines as the shared task benchmark, including:\n• Ad-hoc rule-based baseline: The summary is the concatenation of the first and the last sentences of all component documents in each cluster.\n• Anchor text-based baseline: The summary is the concatenation of the anchor texts of all component documents in each cluster.\n• Extractive baseline: The summary is generated by an extractive summarization model using Lexrank (Erkan and Radev, 2004) and MMR (Goldstein and Carbonell, 1998).\n• Abstractive baseline: The summary is generated by the abstractive summarization model ViT5 (Phan et al., 2022). " }, { "figure_ref": [], "heading": "Participants", "publication_ref": [], "table_ref": [], "text": "There were 46 registered teams from research groups at domestic and international universities (VNU-HUS, VNU-UET, HUST, PTIT, etc.) and industry (Viettel, VinGroup, CMC, TopCV, VCCorp, etc.).\nOf these, 28 teams submitted the data agreement, and 16 teams participated officially by submitting at least 1 run on the evaluation platform. Participating teams could use any available tools and resources to build their models. In total, the teams made 287 submissions. Post-challenge panels are now open on AIHUB (http://aihub.ml/competitions/341) to support further research." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b4", "b16", "b2", "b7" ], "table_ref": [], "text": "An interesting observation is that the rule-based baseline achieved surprisingly high results (ranked 6th). This can be explained by the fact that most news articles are written in an explanatory or inductive style, so the first and last sentences often contain important information. The extractive baseline result (ranked 5th) was much better than the anchor text baseline result (ranked 18th), contrary to the assumption that the anchor text can be considered a simple summary of the news text. For the abstractive baseline, we only passed raw data through the ViT5 model without any parameter tuning, so it is unsurprising that its result was low (ranked 19th). The proposed models followed two main approaches: abstractive summarization and hybrid approaches. Participating teams used a variety of techniques, including similarity scoring (TF-IDF, cosine similarity, etc.), graph-based methods (e.g., Lexrank (Erkan and Radev, 2004), Textrank (Mihalcea and Tarau, 2004), Pagerank (Brin and Page, 1998), etc.), sentence classification (Long short-term memory (Hochreiter and Schmidhuber, 1997), BERT (Kenton and Toutanova, 2019), etc.) and text correlation. The results on the private test were considered the official results for ranking the teams in the Abmusu shared task. The ROUGE-2 results of the top 5 teams and 4 baselines are shown in Table 3 (see Appendix A for the full results). All 16 teams achieved performance higher than the anchor text baseline and the abstractive baseline. There were 5 teams that achieved a higher F-score than our extractive and rule-based baselines. The best ROUGE-2 F1 obtained was 0.3035, with the corresponding ROUGE-2 P and ROUGE-2 R being 0.3035 and 0.2298, respectively. 
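To make the computation of Eqs. (1)–(3) concrete, the short Python sketch below scores a predicted summary against a reference with ROUGE-2. The whitespace tokenizer and the example strings are illustrative assumptions and this is not the organizers' scoring script.

```python
from collections import Counter

def bigrams(tokens):
    # Consecutive token pairs (2-grams) of a tokenized summary.
    return Counter(zip(tokens, tokens[1:]))

def rouge2(predicted, reference):
    """ROUGE-2 precision, recall and F1 as in Eqs. (1)-(3).

    Both arguments are plain strings; the whitespace split below is a
    stand-in for a proper Vietnamese tokenizer.
    """
    pred_bi = bigrams(predicted.split())
    ref_bi = bigrams(reference.split())
    # |Matched n-grams|: clipped bigram overlap between the two summaries.
    matched = sum((pred_bi & ref_bi).values())
    p = matched / max(sum(pred_bi.values()), 1)        # Eq. (1)
    r = matched / max(sum(ref_bi.values()), 1)         # Eq. (2)
    f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0   # Eq. (3)
    return p, r, f1

if __name__ == "__main__":
    p, r, f1 = rouge2("the cat sat on the mat", "the cat lay on the mat")
    print(f"R2-P={p:.3f}  R2-R={r:.3f}  R2-F1={f1:.3f}")
```

In the shared task, this bigram-level matching is applied per cluster between each submitted summary and the manually created reference summary, and R2-F1 is used as the ranking score.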
" }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "The data was supported by the Project \"Research and Development of Vietnamese Multi-document Summarization Based on Advanced Language Models\" of Vietnam National University, Hanoi (Code: QG.22.61). The shared task committee would like to gratitude Dagoras Technology and Communications JSC. for their technical and financial support. We also thank all members of the Data Science and Knowledge Technology Laboratory, FIT, UET, VNU because of their continuous support and encouragement." } ]
This paper presents an overview of the VLSP 2022 - Vietnamese abstractive multi-document summarization (Abmusu) shared task for Vietnamese news. The task was hosted at the 9th annual workshop on Vietnamese Language and Speech Processing (VLSP 2022). The goal of the Abmusu shared task is to develop summarization systems that can automatically create abstractive summaries for a set of documents on a topic. The model input is multiple news documents on the same topic, and the corresponding output is a related abstractive summary. Within the scope of the Abmusu shared task, we focus exclusively on Vietnamese news summarization and build a human-annotated dataset of 1,839 documents in 600 clusters, collected from Vietnamese news in 8 categories. Participating models are evaluated and ranked in terms of the ROUGE-2 F1 score, the most common evaluation metric for the document summarization problem.
Overview of the VLSP 2022 - Abmusu Shared Task: A Data Challenge for Vietnamese Abstractive Multi-document Summarization
[ { "figure_caption": "Figure 1 :1Figure 1: The annotation process.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "show the distribution of categories in the training/validation set and the test set. Table1 and Table 2describe the statistics of the Abmusu dataset in detail at the tokenand the sentence level. The compression ratio of Abmusu dataset is ∼ 9%, the manually created summaries often contain 4 -6 sentences. Average statistics and compression ratio at token-level", "figure_data": "AspectsTraining ValidationTestAverageDocuments per Cluster3.113.043.05Tokens per Cluster1924.751815.411762.40Tokens per Raw text619.88597.17578.46Tokens per Anchor text41.6535.5840.33Tokens per Summary168.48167.68153.05Compression ratioMulti-document Summary0.090.090.09AspectsTraining Validation TestAverageSentences per Cluster66.9360.6961.07Sentences per Raw text21.5619.9620.04Sentences per Anchor text1.721.271.57Sentences per Summary4.824.944.93Compression ratioMulti-document Summary0.070.080.08Table 2: Average statistics and compression ratio atsentence-level4 Challenge Results4.1 Data Format and SubmissionEach data example includes the title, anchor textand body text of all single documents in a clus-", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The official top 5 results on the Private Test. The number highlighted in bold is the highest result in each column. The number in the bracket () is the corresponding rank of a score. Baseline results are shown in italic.", "figure_data": "The VLSP 2022 -Abmusu shared task was de-signed to promote the development of research forthe problem of abstractive multi-document sum-marization problem. We tend to compare differentsummarization approaches and provide a standardtest-bed for future research. The Abmusu datasetis constructed carefully, it is expected to make sig-nificant contributions to the other related works.Abmusu attracted the attention of the research com-munity, participated teams came up with many dif-ferent approaches and used a variety of advancedtechnologies and resources. We archived some ex-citing and potential results, which are useful bench-marks for future research. Finally, we happily con-clude that the VLSP 2022 -Abmusu shared taskwas run successfully and is expected to contributesignificantly to Vietnamese text mining and naturallanguage processing communities.", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The official results on the Private Test. The number highlighted in bold is the highest result in each column. The number in the bracket () is the corresponding rank of a score. Baseline results are shown in italic.", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Mai-Vu Tran; Hoang-Quynh Le; Duy-Cat Can; Quoc-An Nguyen
[ { "authors": "Asma Ben Abacha; M' Yassine; Yuhao Rabet; Chaitanya Zhang; Curtis Shivade; Dina Langlotz; Demner-Fushman", "journal": "", "ref_id": "b0", "title": "Overview of the mediqa 2021 shared task on summarization in the medical domain", "year": "2021" }, { "authors": "Mehdi Allahyari; Seyedamin Pouriyeh; Mehdi Assefi; Saeid Safaei; Elizabeth D Trippe; Juan B Gutierrez; Krys Kochut", "journal": "International Journal of Advanced Computer Science and Applications (ijacsa)", "ref_id": "b1", "title": "Text summarization techniques: A brief survey", "year": "2017" }, { "authors": "Sergey Brin; Lawrence Page", "journal": "Computer networks and ISDN systems", "ref_id": "b2", "title": "The anatomy of a large-scale hypertextual web search engine", "year": "1998" }, { "authors": "S Wafaa; El-Kassas; Ahmed A Cherif R Salama; Rafea; Hoda; Mohamed", "journal": "Expert Systems with Applications", "ref_id": "b3", "title": "Automatic text summarization: A comprehensive survey", "year": "2021" }, { "authors": "Günes Erkan; Dragomir R Radev", "journal": "Journal of artificial intelligence research", "ref_id": "b4", "title": "Lexrank: Graph-based lexical centrality as salience in text summarization", "year": "2004" }, { "authors": "Mahak Gambhir; Vishal Gupta", "journal": "Artificial Intelligence Review", "ref_id": "b5", "title": "Recent automatic text summarization techniques: a survey", "year": "2017" }, { "authors": "Jade Goldstein; Jaime G Carbonell", "journal": "", "ref_id": "b6", "title": "Summarization:(1) using mmr for diversity-based reranking and (2) evaluating summaries", "year": "1998-10-13" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural computation", "ref_id": "b7", "title": "Long short-term memory", "year": "1997" }, { "authors": "Liwei Hou; Po Hu; Chao Bei", "journal": "Springer", "ref_id": "b8", "title": "Abstractive document summarization via neural model with joint attention", "year": "2017" }, { "authors": "Karel Ježek; Josef Steinberger", "journal": "", "ref_id": "b9", "title": "Automatic text summarization (the state of the art 2007 and new challenges)", "year": "2008" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton ; Lee Kristina; Toutanova ", "journal": "", "ref_id": "b10", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Rahim Khan; Yurong Qian; Sajid Naeem", "journal": "International Journal of Information Engineering and Electronic Business", "ref_id": "b11", "title": "Extractive based text summarization using k-means and tf-idf", "year": "2019" }, { "authors": "Jan-Christoph Klie; Michael Bugert; Beto Boullosa; Richard Eckart De Castilho; Iryna Gurevych", "journal": "", "ref_id": "b12", "title": "The inception platform: Machine-assisted and knowledge-oriented interactive annotation", "year": "2018" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b13", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Chin-Yew Lin", "journal": "", "ref_id": "b14", "title": "Rouge: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Rafael Ferreira Rafael Dueire Lins; Steve Mello; Simske", "journal": "", "ref_id": "b15", "title": "Doceng'19 competition on extractive text summarization", "year": "2019" }, { "authors": "Rada 
Mihalcea; Paul Tarau", "journal": "", "ref_id": "b16", "title": "Textrank: Bringing order into text", "year": "2004" }, { "authors": "Long Phan; Hieu Tran; Hieu Nguyen; H Trieu; Trinh", "journal": "", "ref_id": "b17", "title": "Vit5: Pretrained text-to-text transformer for vietnamese language generation", "year": "2022" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b18", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Jingqing Zhang; Yao Zhao; Mohammad Saleh; Peter Liu", "journal": "PMLR", "ref_id": "b19", "title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 318.87, 364.82, 206.27, 24.43 ], "formula_id": "formula_0", "formula_text": "R2-P = |Matched n-grams| |Predicted summary n-grams|(1)" }, { "formula_coordinates": [ 3, 355.29, 456.75, 169.85, 24.43 ], "formula_id": "formula_1", "formula_text": "R2-F1 = 2 × R2-P × R2-R R2-P + R2-R(3)" } ]
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b9", "b17", "b46", "b14", "b4", "b20", "b46", "b56", "b20", "b25", "b30", "b43", "b53", "b56", "b54", "b0", "b56", "b20", "b14", "b19" ], "table_ref": [], "text": "Data, as a necessary resource for deep learning, has concurrently promoted algorithmic advancements while imposing challenges on researchers due to heavy demands on storage and computational resources [6, 10,18,47]. Confronted with the conflict between the requirement for high-precision [15], with a number following each method denoting the Image-Per-Class (IPC) setting. Previous methods are restricted by the heavier running time and memory consumption as IPC grows larger. In comparison, our proposed method notably reduces the demanding computational resources and also achieves state-of-the-art validation performance.\nmodels and overwhelming resource demands, dataset distillation is proposed to condense the rich information of a large-scale dataset into a small surrogate one [5,21,47,57]. Such a surrogate dataset is expected to achieve training performance comparable to that attained with the original one. Previous dataset distillation methods mostly engage in iterative optimization on fixed-number samples at the pixel level [21,26,30,31,44,54,57] or embedding level [4,55]. However, the sample-wise iterative optimization scheme suffers from problems of two perspectives. (1) The parameter space of optimization is positively correlated with the size of the target surrogate dataset and the image resolution [3,57]. Consequently, substantial time and computational resources are required for distilling larger datasets. As shown in Fig. 1, IDC-1 [21] takes over 90 hours to distill a 100-image-per-class (IPC) set from ImageWoof [15], while training on ImageWoof itself only requires a matter of hours. (2) The expanded parameter space also increases the optimization complexity. As shown in Fig. 2, while distillation yields significant information condensation under small IPC settings, the pixel modification diminishes when distilling larger-IPC datasets. The reduced disparity also leads to smaller performance gain compared with original images, with instances where the distilled set even performs worse. Especially when distilling data of fine-grained classes, the sample-wise optimization scheme fails to provide adequate discriminative information. These constraints severely hinder individual researchers from distilling personalized data. A more practical training scheme is urgently needed to facilitate the broader application of dataset distillation.\nIn this work, we explore the possibility of incorporating generative diffusion techniques [20,22,32] to efficiently compute effective surrogate datasets. We first conduct empirical analysis on the suitability of data generated by raw diffusion models for training networks. Based on the observations, we conclude that constructing an effective surrogate dataset hinges on two key factors: representativeness and diversity. Accordingly, we design extra minimax criteria for the generative training to enhance the capability of generating more effective surrogate datasets without explicit prompt designs. The minimax criteria involve two aspects: enforcing the generated sample to be close to the farthest real sample, while being far away from the most similar generated one. 
We provide theoretical analysis to support that the proposed minimax scheme aims to solve a well defined problem with all the criteria, including the generative accuracy and the minimax criteria, can be targeted simultaneously without detriment to the others.\nCompared with the astronomical training time consumption of the sample-wise iterative optimization schemes, the proposed method takes less than 1 hour to distill a 100-IPC surrogate dataset for a 10-class ImageNet subset, including " }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Problem Definition", "publication_ref": [ "b46", "b56" ], "table_ref": [], "text": "The general purpose of dataset distillation is to generate a small surrogate dataset S = {(x i , y i )} N S i=1 from a largescale one T = {(x i , y i )} N T i=1 [47,57]. Here each x i denotes an image with a corresponding class label y i , and N S ≪ N T . The surrogate dataset S is expected to encapsulate substantial information from the original T , such that training a model on S achieves performance comparable with that on T . After distilling, we train network models on S and validate the performance on the original test set." }, { "figure_ref": [ "fig_2" ], "heading": "Diffusion for Distillation", "publication_ref": [ "b7", "b33" ], "table_ref": [], "text": "Diffusion models learn a dataset distribution by gradually adding Gaussian noise to images and reversing back. Taking the latent diffusion model (LDM) as an example, given a training image x, the training process is separated into two parts. An encoder E transforms the image into the latent space z = E(x) and a decoder D reconstructs a latent code back to the image space x = D(z). The forward noising process gradually adds noise ϵ ∼ N (0, I) to the original latent code z 0 :\nz t = √ ᾱt z 0 + √ 1 -ᾱt ϵ,\nwhere ᾱt is a hyper-parameter. Provided with a conditioning vector c encoded with class labels, the diffusion models are trained by the squared error between the predicted noise ϵ θ (z t , c) and the ground truth ϵ:\nL simple = ||ϵ θ (z t , c) -ϵ|| 2 2 ,(1)\nwhere ϵ θ is a noise prediction network parameterized by θ. Diffusion models are proven to generate images of higher quality compared with GANs [8]. There are also some Parameter Efficient Fine-Tuning (PEFT) methods updating a small number of model parameters in order for the model to be better adapted to specific data domains [34,50]. We adopt DiT [33] as the baseline and Difffit [50] as the naive fine-tuning method for image generation. The generated images are compared with the original data from the perspective of embedding distribution in Fig. 3. The samples of random selection and pre-trained diffusion models present two extreme ends of the distribution. Random selection faithfully reflects the original distribution, yet fails to emphasize some high-density regions. In contrast, diffusion models are over-fitted to those dense areas, leaving a large part of the original distribution uncovered. We attribute these two distributions to two properties, respectively. The randomly selected data holds extraordinary diversity, and the diffusion-generated data shows representativeness to the original distribution. We claim that both properties are essential for constructing an effective surrogate dataset. By naive fine-tuning, Difffit better captures the representative regions but leaves more regions uncovered. 
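As a rough illustration of this baseline, the forward noising step and the noise-prediction loss of Eq. (1) can be sketched as follows. The names `encoder`, `noise_model`, and `alpha_bar` are placeholders for the frozen VAE encoder, the conditional noise-prediction network, and the cumulative noise schedule; this is a minimal sketch under those assumptions, not the actual DiT/Difffit code.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(encoder, noise_model, alpha_bar, images, class_labels):
    """One simplified training step of the class-conditional latent diffusion baseline.

    encoder:      frozen VAE encoder mapping images to latents z_0 = E(x)
    noise_model:  conditional network predicting the added noise eps_theta(z_t, c)
    alpha_bar:    1-D tensor holding the cumulative noise schedule (alpha-bar_t)
    """
    with torch.no_grad():
        z0 = encoder(images)                                  # latent codes of real images
    alpha_bar = alpha_bar.to(z0.device)
    t = torch.randint(0, alpha_bar.shape[0], (z0.shape[0],), device=z0.device)
    a_bar = alpha_bar[t].view(-1, *([1] * (z0.dim() - 1)))    # broadcast over latent dims
    eps = torch.randn_like(z0)
    # Forward noising: z_t = sqrt(abar_t) * z_0 + sqrt(1 - abar_t) * eps
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps
    eps_pred = noise_model(z_t, t, class_labels)              # eps_theta(z_t, c)
    loss_simple = F.mse_loss(eps_pred, eps)                   # Eq. (1)
    return loss_simple
```

The extra criteria introduced next are added on top of this unmodified baseline step.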
To this end, we propose extra minimax criteria for the diffusion model to enhance both of these properties." }, { "figure_ref": [ "fig_2" ], "heading": "Minimax Diffusion Criteria", "publication_ref": [], "table_ref": [], "text": "Based on the observation that representativeness and diversity are two key factors for constructing an effective surrogate dataset, we accordingly design extra minimax criteria to enhance these two essential properties for the diffusion model. Representativeness. It is essential for the small surrogate dataset to sufficiently represent the original data. A naive approach to improve the representativeness is aligning the embedding distribution between synthetic and real samples:\nL_r = arg max_θ σ( ẑ_θ(z_t, c), (1/N_B) Σ_{i=1}^{N_B} z_i ), (2)\nwhere σ(•, •) is the cosine similarity, ẑ_θ(z_t, c) is the predicted original embedding obtained by subtracting the noise from the noisy embedding, ẑ_θ(z_t, c) = z_t - ε_θ(z_t, c), and N_B is the size of the sampled mini-batch of real samples. However, the naive alignment tends to draw the predicted embedding towards the center of the real distribution, which severely limits the diversity. Therefore, we propose to maintain an auxiliary memory M = {z_m}_{m=1}^{N_M} to store the real samples utilized in adjacent iterations, and design a minimax optimization objective as:\nL_r = arg max_θ min_{m ∈ [N_M]} σ( ẑ_θ(z_t, c), z_m ). (3)\nBy pulling close the least similar sample pairs, the diffusion model is encouraged to generate images that better cover the original distribution. It is notable that the diffusion training objective L_simple itself encourages the generated images to resemble the original ones. Thus, the minimax criterion allows the preservation of diversity to the maximum extent.\nDiversity. Although the pre-trained diffusion models already achieve satisfactory generation quality, the remaining defect is limited diversity compared with the original data, as shown in Fig. 3. We expect the data generated by the diffusion model to accurately reflect the original distribution while the samples simultaneously remain different from each other. Hence, we maintain another auxiliary memory D = {z_d}_{d=1}^{N_D} for the predicted embeddings of adjacent iterations and design another minimax objective to explicitly enhance the sample diversity as:\nL_d = arg min_θ max_{d ∈ [N_D]} σ( ẑ_θ(z_t, c), z_d ). (4)\nThe diversity term has an opposite optimization target compared with the representativeness term, where the predicted embedding is pushed away from the most similar one stored in the memory bank. Although diversity is essential for an effective surrogate set, too much of it will cause the generated data to lose representativeness. The proposed minimax optimization enhances the diversity in a gentle way, with less influence on the class-related features. Combining all the components, we summarize the training process in Algorithm 1. The complete training objective can be formulated as:\nL = L_simple + λ_r L_r + λ_d L_d, (5)\nwhere λ_r and λ_d are weighting hyper-parameters." }, { "figure_ref": [], "heading": "Theoretical Analysis", "publication_ref": [], "table_ref": [], "text": "Assume that µ is the real distribution of the latent variables z associated with the target dataset T. We rewrite the optimization problem presented in Eq. (5) in a modified form:\nmin_{{θ^(i)}_{i ∈ [N_D]}}  λ_d max_{i,j = 1,…,N_D} σ( ẑ(θ^(i)), ẑ(θ^(j)) ) + Σ_{i=1}^{N_D} [ -λ_r Q_{q, w∼µ}[ σ( ẑ(θ^(i)), w ) ] + ∥ẑ(θ^(i)) - z_0^(i)∥² ], (6)\nwhere Q_q[•] denotes the quantile function with q as the quantile percentage. 
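As a practical aside, a minimal PyTorch-style sketch of the minimax terms in Eqs. (3) and (4), and of their combination in Eq. (5), is given below. The memory banks are represented as plain tensors of recently seen real and predicted embeddings, and all names are illustrative assumptions rather than identifiers from any released implementation.

```python
import torch
import torch.nn.functional as F

def minimax_losses(z_hat, real_memory, pred_memory):
    """Minimax representativeness / diversity terms of Eqs. (3) and (4).

    z_hat:        (B, D) predicted original embeddings z_hat_theta(z_t, c)
    real_memory:  (N_M, D) real embeddings from adjacent iterations (memory M)
    pred_memory:  (N_D, D) predicted embeddings from adjacent iterations (memory D)
    """
    z = F.normalize(z_hat, dim=-1)
    sim_real = z @ F.normalize(real_memory, dim=-1).T   # (B, N_M) cosine similarities
    sim_pred = z @ F.normalize(pred_memory, dim=-1).T   # (B, N_D) cosine similarities
    # Eq. (3): pull each prediction towards its *least* similar stored real sample.
    loss_r = -sim_real.min(dim=1).values.mean()
    # Eq. (4): push each prediction away from its *most* similar past prediction.
    loss_d = sim_pred.max(dim=1).values.mean()
    return loss_r, loss_d

# Eq. (5): the two terms are simply added to the denoising loss,
#   loss = loss_simple + lambda_r * loss_r + lambda_d * loss_d
```

During fine-tuning, loss_r and loss_d would be weighted by λ_r and λ_d and added to L_simple, while both memories are refreshed with the embeddings of adjacent iterations.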
Note that here we consider a theoretical idealized variant of our algorithm wherein we perform simultaneous generation of all the embeddings {ẑ(θ (i) )}, rather than sample by sample. Hence the objectives turn to the sum of pairwise similarities rather than the form in Eq. (4). And we minimize the negative to aim for maximal representativeness, as in Eq. ( 5). It can be considered as a scalarized solution to a multiobjective optimization problem, wherein multiple criteria are weighed (see, e.g. [13]). This perspective aligns with a Pareto front with trade-offs. It means that one objective decreasing will by necessity result in another increasing.\nHowever, consider that any solution to the following trilevel optimization problem is also a solution for Eq. (6):\nmin {θ (i) } i∈[N D ] max i,j=1,..,N D σ ẑ(θ (i) ), ẑ(θ (j) ) subj. to {θ (i) } ∈ arg max N D i=1 Qq,w∼µ σ ẑ(θ (i) ), w subj. to θ (i) ∈ arg min ∥ẑ(θ) -z (i) 0 ∥ 2 , ∀i ∈ [ND].(7)\nIf a solution to Eq. ( 7) is discovered, either incidentally through solving Eq. ( 6) or by careful tuning of step sizes, the set of minimizers will be sufficiently large at both levels, with no trade-offs involved. However, can we justify the presumption that there exists a meaningful set of potential minimizers?\nDiffusion Process Model One popular framework for the mathematical analysis of diffusion involves analyzing the convergence and asymptotic properties of, appropriately homonymous, diffusion processes. These processes are characterized by the standard stochastic differential equation with a drift and diffusion term. For a time-dependent random variable Z t ,\ndZ t = V (Z t )dt + dW t (8\n)\nwhere V is a drift function dependent on the current Z t and dW t is a Wiener (Brownian noise) process. This equation serves as an appropriate continuous approximation of generative diffusion, given that Brownian noise is a continuous limit of adding normal random variables. Consequently, we aim for any realization z ∼ Z t to have certain desired properties that reflect generative modeling with high probability.\nThe work [43] established a theoretical model utilizing concepts in the control of these diffusions, demonstrating how it can result in sampling from the distribution of a desired data set. In the supplementary material we present a description of their framework and present an argument supporting the well-defined nature of the following problem, indicating that it has non-trivial solutions.\nWhen sampling optimally from the population dataset, we consider a stochastic control problem wherein V depends also on some chosen control u(z, t). This control aims to find the most representative samples and, among the possible collection of such samples, to obtain the most diverse one while sampling from the desired dataset µ. This involves solving:\nmin u(x,t) max i,j=1,..,N D σ Z u,(i) 1 , Z u,(j) 1 subj. to u ∈ arg max N D i=1 1 0 E Z (i) t Qq,w∼µ σ Z (i) t , w ds Z1 ∼ µ, dZ u,(i) t = u(Z u,(i) t , t)dt + dWt, t ∈ [0, 1]; Z0 = z0.(9)\nThis problem poses a bi-level stochastic control challenge where employing a layered dynamic programming is far from tractable. Additionally, a multi-stage stochastic programming approximation would also be infeasible given the scale of the datasets involved. Instead, we opt for parameterization with a neural network, forego exact sampling, discretize the problem, redefine the criteria to be time independent, and seek to solve an approximate solution for the tri-level optimization problem Eq. ( 7). 
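To illustrate the discretization referred to above, the controlled particle dynamics dZ_t = u(Z_t, t) dt + dW_t can be simulated with a simple Euler–Maruyama scheme. The `drift` callable below stands in for a learned control (in practice, the parameterized noise-prediction network); this is a conceptual sketch of the idealized model, not the sampler used in the experiments.

```python
import torch

def euler_maruyama(drift, z0, num_particles, num_steps=50):
    """Simulate N_D controlled particles of dZ_t = u(Z_t, t) dt + dW_t on [0, 1].

    drift: callable u(z, t) returning a tensor shaped like z (the control)
    z0:    (D,) shared initial latent code for all particles
    """
    dt = 1.0 / num_steps
    z = z0.unsqueeze(0).repeat(num_particles, 1)     # every particle starts at z_0
    for k in range(num_steps):
        t = torch.full((num_particles, 1), k * dt)
        noise = torch.randn_like(z) * dt ** 0.5      # Brownian increment dW_t
        z = z + drift(z, t) * dt + noise             # Euler-Maruyama update
    return z                                         # approximate samples Z_1^(i)
```

Applying the representativeness and diversity criteria to the resulting particles corresponds to the discrete, time-independent approximation of the bi-level control problem in Eq. (9).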
In the supplementary material we provide a rationale for the meaningfulness of the problem in Eq. ( 9) based on the model of generative diffusion [43]. Specifically, we argue that the set of controls that leads to the desired final distribution and the set of minimizers, is sufficiently large for a low value of the objective at the top layer." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "For the diffusion model, we adopt pre-trained DiT [33] as the baseline and conduct PEFT with Difffit [50]. λ r and λ d are set as 0.002 and 0.008 for Eq. ( 5), respectively. The image size for the diffusion fine-tuning and sample generation is set as 256×256. The fine-tuning mini-batch size is set as 8, and the fine-tuning lasts 8 epochs. The learning rate is set as 1e-3 for an AdamW optimizer. After fine-tuning, the images are generated by 50 denoising steps on a pre-defined number of random noise, according to the IPC setting. All the experiments are conducted on a single RTX 4090 GPU." }, { "figure_ref": [], "heading": "Datasets and Evaluation Metric", "publication_ref": [ "b14", "b20", "b41", "b20" ], "table_ref": [], "text": "For practical applicability, the experiments are exclusively conducted on full-sized ImageNet [6] subsets in this work. The selected subsets include ImageWoof, ImageNette [15] and the 10-class split adopted in [21,42], denoted as Im-ageIDC afterward. ImageWoof is a challenging subset, containing only classes of dog breeds, while ImageNette and ImageIDC contain classes with less similarity, and hence are easier to discriminate. For evaluation, we adopt the same setting as in [21]. The surrogate dataset is trained on different model architectures, with a learning rate of 0.01, " }, { "figure_ref": [], "heading": "Comparison with State-of-the-art Methods", "publication_ref": [ "b17", "b20", "b14", "b20", "b47", "b14", "b20" ], "table_ref": [], "text": "We compare our method with other state-of-the-art methods across different IPC settings and model architectures. For a fair comparison, the results are all reproduced by us under the same evaluation protocol. ResNet-10 [18] with average pooling is adopted for matching the feature distribution (DM [56], GLaD [4]) and training gradients (IDC-1 [21]). DM is implemented on IDC-1 by only modifying the matching objective from training gradients to feature distribution, such that better performance is achieved. Each experiment is conducted 3 times, with the mean value and standard variance reported. Firstly, we present the validation results on the challenging ImageWoof subset [15] in Tab. 1.\nWith the target of distilling surrogate datasets of small IPCs (e.g., 10 and 20), the pixel-level optimization method IDC-1 [21] demonstrates outstanding performance gain over random original images. However, as the IPC increases, the performance gain drastically drops. Especially under the 100-IPC setting, the distilled dataset even performs worse than random original images. This observation aligns with the empirical findings in Fig. 2, where pixellevel methods struggle to optimize the expanded parameter space of large IPCs. The embedding-level optimization method GLaD [4] yields good performance under the 10-IPC setting. However, it requires overwhelming GPU resources for larger IPC settings, which is inapplicable for resource-restricted scenarios. 
It is also notable that on large IPCs, the coreset method Herding [48] surpasses previous DD methods with far less computational cost. The pre-trained DiT [33] here serves as the baseline for generative diffusion techniques. Under the 50-IPC setting, DiT outperforms both random original images and IDC-1. However, the insufficiency of representativeness and diversity restricts its performance on smaller and larger IPCs, respectively. In contrast, our proposed minimax diffusion consistently provides superior performance across all the IPCs over both original images and Herding. Besides, the proposed method eliminates the need of specific network architectures for matching training metrics. Consequently, the cross-architecture generalization is significantly improved. Under most IPC settings, the performance gap between ConvNet-6 and ResNetAP-10 is even smaller than that of the original images. It validates the universality of the rich information learned by the minimax fine-tuning process.\nFurthermore, we extensively assess the Maximum Mean Discrepancy (MMD) between the embedded features of the selected/generated surrogate dataset and the original one in Tab. 2. The features are extracted by a ResNet-10 network pre-trained on the full original dataset. Our method achieves the lowest discrepancy by average, where DM [56] directly sets MMD as the optimization target, proving the validity of extra minimax criteria in fitting distributions.\nMoreover, we show the performance comparison on Im-ageNette [15] and ImageIDC [21] in Tab. 3. The performance trend generally aligns with that on ImageWoof. More specifically, on these two easier subsets, DiT quickly loses the advantage over original images as IPC increases. Conversely, our proposed minimax diffusion method consistently demonstrates state-of-the-art performance. Experiments on ImageNet-1K. We further conduct experiments on the full ImageNet-1K with the validation protocol of RDED [41] and present the results in Tab. 4. The synthetic images are resized to 224×224 for evaluation. The significant performance advantage over the compared works validates the scalability of the proposed method." }, { "figure_ref": [ "fig_3", "fig_2" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "Component Analysis. We compare the performance with baseline diffusion models to validate the effectiveness of proposed minimax criteria in Fig. 4. The experiments are conducted on ImageWoof and ImageIDC to evaluate the effect on challenging and easy tasks, respectively. Under the IPC of 10 and 20, the raw diffusion models (DiT) generate informative images, with validation performance much higher than randomly selected original samples. However, as the IPC is continuously increased, the performance gap diminishes for ImageWoof, and random original images surpass the DiT-generated ones at the IPC of 100. On Im-ageIDC the intersection occurs even earlier at the IPC of 50. The main reason is reflected in Fig. 3, where the sample diversity remains limited without external guidance. The naive Difffit fine-tuning adapts the model to specific domains, yet on large IPCs, the over-fitted generative model still yields inferior performance than the original images. The addition of representativeness constraint to the training process further enhances the effect of distribution fitting. At small IPCs, the generated images contain richer information, yet for larger IPCs, the lack of diversity brings a negative influence. 
The diversity constraint, in contrast, significantly boosts the information contained in the generated surrogate dataset. Despite the performance advantage of L d over L r , combining them still brings stable improvement as our full method. Especially on the easier ImageIDC task, grouping these two constraints together contributes to a consistent performance margin over random original images. The experimental results validate that both representativeness and diversity play essential parts in constructing effective surrogate datasets.\nMinimax Scheme. In this work, we propose to enhance the representativeness and diversity each with a minimax objective. We compare the distillation result with or with-out the minimax operation in Tab. 5. The first row presents the performance of naive Difffit fine-tuning. Matching the embeddings to the distribution center as in Eq. (2) severely degrades the validation performance across all IPCs. In contrast, the minimax version constraint as in Eq. (3) encourages better coverage, where the performance on small IPCs is improved. The effects of diversity constraint and the full method show similar trends. The superior performance suggests the effectiveness in enhancing the essential properties of the generative diffusion techniques." }, { "figure_ref": [ "fig_2", "fig_4", "fig_5" ], "heading": "Visualization", "publication_ref": [], "table_ref": [], "text": "Sample Distribution Visualization. The target of our proposed method is to construct a surrogate dataset with both representativeness and diversity. We visualize the t-SNE distribution of samples generated by our proposed method in Fig. 3. In comparison with random original images and baseline diffusion models, our method demonstrates a more thorough coverage over the entire data distribution while maintaining consistency in sample density. At the original high-density region, the generated images also form a dense sub-cluster, which is not reflected by random sampling. On the other hand, at the original sparse regions, our method exhibits better coverage than baseline diffusion models. By simultaneously enhancing the representativeness and diversity in the generative model, the proposed method manages to significantly improve the validation performance of the generated surrogate dataset. Generated Sample Comparison. The proposed method notably enhances the representativeness and diversity of the generated surrogate dataset. We compare the samples generated with the same random noise (for each column) of different generative methods in Fig. 5 to explicitly demonstrate the improved properties.\nThe images generated by baseline DiT exhibit a realistic high-quality appearance. However, the images tend to share similar poses and only present the most prominent features of the objects. In the golden retriever case, the generated images mostly present the head part, while for the churches the exterior appearance. Difffit fine-tuning further fits the model to the distribution, but in most cases, the differences only lie in small details. Comparatively, the proposed minimax criteria significantly enhance both the representativeness and diversity of the generated images. On the one hand, there occurs more class-related content in the generated images. The golden retriever images include more body parts and the church images encompass the interior layout. The minimax optimization leads to better coverage over the entire original distribution, with more related features encapsulated. 
On the other hand, the diversity is significantly enhanced, including variations in pose, background, and appearance styles. In such a way the surrogate dataset better represents the original large-scale one, leading to superior validation performance. More sample visualizations are provided in the supplementary material. Training Curve Visualization. We visualize the accuracy curve during the training process in Fig. 6a. The validation performance is rapidly improved as the fine-tuning process starts. After four epochs, the model tends to converge and reaches the highest performance at the 8th epoch. Further extending the training epochs injects excessive diversity into the model, leading to performance degradation. We demonstrate the influence of training epochs on the generated images in supplementary material." }, { "figure_ref": [ "fig_5", "fig_5", "fig_5" ], "heading": "Parameter Analysis", "publication_ref": [], "table_ref": [], "text": "Objective Weight λ r λ d . We show the influence of representativeness weight λ r and diversity weight λ d in Fig. 6b and Fig. 6c, respectively. The λ r variation only produces negligible performance fluctuation on small IPCs, while on large IPCs the performance is also relatively stable. For λ d , at a proper variation range, the performance is stable. However, continuously increasing the diversity of the generated dataset leads to a lack of representativeness, which results in a negative impact. The negative impact of overdiversity can also validated by the poor performance of K-Center in Tab. 1. A uniform performance decrease is observed as λ d reaches 0.03. Based on the performance of 100 IPC, we set λ r as 0.002 and λ d as 0.008. Memory Size N M . The memory size N M influences the number of samples involved in the objective calculation. We investigate its influence in Fig. 6d. When the memory is extremely small (N M =16), the provided supervision is also limited, yet the performance is already higher than naive fine-tuning. As the memory size is increased in a proper range, the model yields stable performance improvement. It is notable that with a larger memory, the performance under the IPC of 10 is better. It can be explained by that a larger memory contains more representative information. Out of the consideration of performance as well as storage burden, we select the memory of 64 in the other experiments." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a novel dataset distillation method based on generative diffusion techniques. Through extra minimax criteria, the proposed method significantly enhances the representativeness and diversity of the generated surrogate dataset. With much less computational time consumption, the proposed method achieves state-of-the-art validation performance on challenging ImageNet subsets. It reduces the resource dependency of previous dataset distillation methods and opens up new possibilities for more practical applications for distilling personalized data. Limitations and Future Works. This work mainly focuses on the classification task. In future works, we will explore the possibility of incorporating generative techniques for more specific data domains." 
}, { "figure_ref": [], "heading": "Efficient Dataset Distillation via Minimax Diffusion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "The supplementary material is organized as follows: Section 6 provides more detailed theoretical analysis; Section 7 presents the related work to this paper; Section 8 elaborates upon the method pipeline; Section 9 contains additional implementation details; Section 10 presents some ablation studies; Section 11 discusses the broader impact; and finally, Section 12 presents ethical considerations." }, { "figure_ref": [], "heading": "Theoretical Analysis", "publication_ref": [ "b8", "b37" ], "table_ref": [], "text": "We present the most relevant parts of the referred work [43, Section 2.1-2.2]. Consider that the diffusion takes place over the finite interval [0, 1] and let µ be the desired sample distribution, such that Z 1 ∼ µ. Assume µ is absolutely continuous with respect to the standard Gaussian, denoted by γ d , and define the Radon-Nikodym derivative f = dµ/dγ d . Then the optimal control, defined in the literature as the Föllmer drift and expressed as 8), then this drift would minimize the cost-to-go function:\nu * (z, t) = ∇ log Q 1-t (f ) = ∇ log 1 (2π(1-t)) d/2 f (y) exp -1 2(1-t) ∥z -y∥ 2 dy would be such that if V (Z t ) = u * (z, t) in Eq. (\nJ u (z, t) := E 1 2 1 t ∥u s ∥ 2 ds -log f (Z u 1 )|Z u t = z .\nEquivalently, such a control is the one that, among all such transportation that maps from γ d to µ, minimizes 1 0 ∥u s ∥ 2 ds [14,24]. The structure of this process presents the opportunity for accurately performing diffusion, enforcing Z u 1 ∼ µ, while simultaneously pursuing additional criteria. Specifically: 1. Immediately we recognize that a nontrivial transportation problem implies the existence of a set (i.e., a nonunique solution to the constraint satisfaction problem) of possible drifts such that the final distribution is µ. We can consider maximizing representativeness as an alternative cost criterion to ∥u s ∥ 2 ds. To present the criteria in a sensible way, given that the training is conducted on a minimum across mini-batches, we can instead aim to maximize a bottom quantile, by the costto-go functional,\nJ r (z, t) = 1 t Q q,w∼µ [σ (Z t , w)] ds,\nwhere q is the quantile percentage, e.g. 0.02 (for instance, if a mini-batch of fifty samples were given, this would be the minimum).\n2. Next, notice that with dataset distillation, the small sample size is significant, which suggests that we can consider the aggregate in a particle framework, where for i = 1, ..., N D , we have,\ndZ u,(i) t = u(Z u,(i) t , t)dt + dW t , t ∈ [0, 1]; Z u,(i) 0 = z 0\npresenting an additional degree of freedom, which we take advantage of by encouraging diversity, i.e., minimizing\nJ d (z, 1) = max i,j=1,..,N D σ Z u,(i) 1 , Z u,(j) 1\n.\nSince generation accuracy and representativeness are criteria for individual particles, maximizing diversity across particles can be considered as optimizing with respect to the additional degree of freedom introduced by having multiple particles. Thus, we can see that it presents the opportunity to consider generative diffusion as a bi-level stochastic control problem.\nA brief note on convergence guarantees for Eq. ( 7) presented in the main paper. 
A straightforward extension of [9] to three layers (similar to the extension from bi-level to trilevel convex optimization in [38]) yields convergence guarantees in expectation to a stationary point for all objectives. It is important to note that in the case of nonconvex objectives, the asymptotic point will satisfy a fairly weak condition. Specifically, it may not be stationary for the top objective, as the lower levels are not necessarily at global minimizers. This is, however, the best that can be ensured with stochastic gradient based methods and similar." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset Distillation", "publication_ref": [ "b4", "b20", "b46", "b56", "b20", "b39", "b6", "b26", "b30", "b20", "b2", "b25", "b43", "b53", "b56" ], "table_ref": [], "text": "Dataset distillation (DD) aims to condense the information of large-scale datasets into small amounts of synthetic images with close training performance [5,21,47,57]. The informative images are also useful for tasks like continual learning [16,21], federated learning [25, 51] and neural architecture search [40]. Previous DD works can be roughly divided into bi-level optimization and training metric matching methods. Bi-level optimization methods incorporate meta learning into the surrogate image update [7,27,28,30,31,59]. In comparison, metric matching methods optimize the synthetic images by matching the training gradients [21,23,26,44,54,57] The training pipeline of the proposed minimax diffusion fine-tuning. The DiT blocks predict the added noise and original embeddings (dark-blue crossings). Then the parameters are updated with the simple diffusion objective and the minimax objectives. The minimax objectives (the right part) enforce the predicted embedding to be close to the farthest real sample and be far away from the closest predicted embedding of adjacent iterations." }, { "figure_ref": [], "heading": "Data Generation with Diffusion", "publication_ref": [ "b7", "b19", "b18", "b35", "b0", "b38" ], "table_ref": [], "text": "The significantly improved image quality and sample diversity by diffusion models opens up new possibilities for data generation [8,20,22,32]. Through prompt engineering [12, 19,36], latent interpolation [60] and classifier-free guidance [1,60], the diversity-improved synthetic images are useful to serve as augmentation or expansion for the original samples. The generated images also contribute to zero-shot image classification tasks [39]. However, these works mainly focus on recovering the original distribution with equal or much larger amounts of data. In contrast, we intend to distill the rich data information into small surrogate datasets. Moreover, prompt engineering usually requires special designs according to different data classes, while our proposed method saves extra effort. As far as we have investigated, there are no previous attempts to incorporate generative diffusion techniques into the dataset distillation task. In addition to diffusion models, there are also some previous works considering the diversity issue for Generative Adversarial Networks (GANs) [2, 17, 29, 52]. However, the improvement in diversity is not reflected in downstream tasks. In this work, we seek to enhance both representativeness and diversity for constructing a small surrogate dataset with similar training performance compared with original large-scale ones." 
}, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "Method Pipeline", "publication_ref": [], "table_ref": [], "text": "We demonstrate the pipeline of the proposed minimax finetuning method in Fig. 7. The real images are first passed through the encoder E to obtain the original embeddings z. Random noise ϵ is then added to the embeddings by the diffusion process. The DiT blocks then predict the added noise, with which the predicted original embeddings ẑ (dark-blue crossings in Fig. 7) are also able to be calculated. We maintain two auxiliary memories M (grey dots) and D (light-blue crossings) to store the encountered real embeddings and predicted embeddings at adjacent iterations, respectively. The denoised embeddings of the current iteration are pushed away from the most similar predicted embedding and are pulled close to the least similar real embedding. The DiT blocks are optimized with the proposed minimax criteria and the simple diffusion training loss L simple as in Eq. ( 1). At the inference stage, given a random noise together with a specified class label, the DiT network predicts the noise that requires to be subtracted. Then the Decoder D recovers the images from the denoised embeddings." }, { "figure_ref": [], "heading": "More Implementation Details", "publication_ref": [ "b20", "b25", "b53", "b17", "b17", "b20" ], "table_ref": [], "text": "We conduct experiments on three commonly adopted network architectures in the area of DD, including: 1. ConvNet-6 is a 6-layer convolutional network. In previous DD works where small-resolution images are distilled, the most popular network is ConvNet-3 [21,26,54]. We extend an extra 3 layers for full-sized 256×256 ImageNet data. The network contains 128 feature channels in each layer, and instance normalization is adopted. 2. ResNetAP-10 is a 10-layer ResNet [18], where the strided convolution is replaced by average pooling for downsampling. 3. ResNet-18 is a 18-layer ResNet [18] with instance normalization (IN). As the IN version performs better than batch normalization under our protocol, we uniformly adopt IN for the experiments.\nFor diffusion fine-tuning, an Adam optimizer is adopted with the learning rate set as 0.001, which is consistent with the original Difffit setting [50]. We set the mini-batch size as 8 mainly due to the GPU memory limitation. The employed augmentations during the fine-tuning stage include random resize-crop and random flip.\nFor the validation training, we adopt the same protocol as in [21]. Specifically, a learning rate of 0.01 for an Adam optimizer is adopted. The training epoch setting is presented in Tab. 6. The reduced training epochs also partly explain the reason why the performance gap between the IPC set- " }, { "figure_ref": [], "heading": "IPC (Ratio) Test Model", "publication_ref": [ "b47", "b20", "b20", "b20" ], "table_ref": [], "text": "Random Herding [48] IDC-1 [21] Ours Full 10 (0.8%) tings of 50 and 70 is relatively small. The adopted data augmentations include random resize-crop and CutMix. In addition to the 10-class ImageNet subsets and full ImageNet-1K, we also conduct experiments on ImageNet-100, and the results are shown in Tab. 7. The validation protocol follows that in IDC [21]. Due to the limitation of computational resources, here we directly employ the official distilled images of IDC-1 [21] for evaluation. The original resolution is 224×224, and we resize the images to 256×256 for fair comparison. Under the IPC setting of 10, IDC-1 achieves the best performance. 
Yet when the IPC increases, the performance gap between the distilled images of IDC-1 and randomly selected original images is smaller. Comparatively, our proposed minimax diffusion method consistently provides a stable performance improvement over original images across different IPC settings. It is worth noting that for IDC-1, the distillation process on ImageNet-100 demands hundreds of hours, while the proposed minimax diffusion only requires 10 hours. The significantly reduced training time offers much more application possibilities for the dataset distillation techniques." }, { "figure_ref": [], "heading": "More Analysis and Discussion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_8" ], "heading": "Diffusion Denoising Step", "publication_ref": [], "table_ref": [], "text": "In our experiments, we set the diffusion denoising step number as 50. We evaluate its influence on the validation performance in Tab. 8. There are no fixed patterns for achieving better performance across all the IPCs. Additionally, we compare the generated images under different step settings in Fig. 8. For DiT [33], the denoising process is conducted in the embedding space. Therefore, it is reasonable that with different steps the generated images are variant in the pixel a small ratio compared with the original data, a considerable performance improvement is still achieved. The results support that the proposed minimax diffusion can also be explored as a dataset expansion method in future works." }, { "figure_ref": [ "fig_9" ], "heading": "Generated Samples of Different Epochs", "publication_ref": [], "table_ref": [], "text": "We visualize the images generated by models after different epochs of training in Fig. 10 to explicitly demonstrate the training effect of the proposed minimax diffusion method.\nAs the training proceeds, the generated images present variation trends from several perspectives. Firstly, the images tend to have more complicated backgrounds and environments, such as more realistic water and objects of other categories (e.g. human). Secondly, there are more details filled in the images, like the clothes in the first column and the red spots in the sixth. These new facets significantly enhance the diversity of the generated surrogate dataset. Furthermore, through the fine-tuning process, the class-related features are also enhanced. In the ninth and tenth columns, the model at the fourth epoch fails to generate objects with discriminative features. In comparison, the images generated by subsequent models demonstrate substantial improvement regarding the representativeness property.\n10.6. Generation Quality Evaluation.\nWe further report quantitative evaluations on the generation quality by adding the proposed minimax criteria in Tab. 10. The representativeness and diversity constraints improve the precision and recall of the generated data, respectively. The full method finds a balanced point between these two properties while obtaining the best coverage over the whole distribution. The fine-tuning brings negligible influence on the FID metric. And all the metrics of our proposed method are significantly better than those attained by DM [56]." }, { "figure_ref": [ "fig_10", "fig_19" ], "heading": "Generated Samples of Different Classes", "publication_ref": [ "b47" ], "table_ref": [], "text": "We present the comparison between the samples selected by Herding [48] and those generated by our proposed minimax diffusion method on ImageNet-100 from Fig. 11 to Fig. 
20.\nIn most cases, the diffusion model is able to generate realistic images, which cannot easily be told from real samples. Herding also aims to select both representative and diverse samples. However, the lack of supervision on the semantic level led to the inclusion of noisy samples. For instance, the walking stick class contains images of mantis, which can originally be caused by mislabeling. The proposed minimax diffusion, in comparison, accurately generates images of the corresponding classes, which is also validated by the better performance shown in Tab. 7. There are also some failure cases for the diffusion model. The fur texture of hairy animals like Shih-Tzu and langur is unrealistic. The structures of human faces and hands also require further refinement. We treat these defects as exploration directions of future works for both diffusion models and the dataset distillation usage." }, { "figure_ref": [], "heading": "Broader Impacts", "publication_ref": [], "table_ref": [], "text": "The general purpose of dataset distillation is to reduce the demands of storage and computational resources for training deep neural networks. The requirement of saving resource consumption is even tenser at the age of foundation models. Dataset distillation aims to push forward the process of environmental contributions. From this perspective, the proposed minimax diffusion method significantly reduces the requirement resources for the distillation process itself. We hope that through this work, the computer vision society can put more attention on practical dataset distillation methods, which are able to promote the sustainable development of society." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "There are no direct ethical issues attached to this work. We employ the publicly available ImageNet dataset for experiments. In future works, we will also be devoted to considering the generation bias and diversity during constructing a small surrogate dataset. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Source code and generated data are available in https://github.com/vimar-gu/MinimaxDiffusion." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "† This work has received funding from the European Union's Horizon Europe research and innovation program under grant agreement No. 101084642." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "space. It can be observed that under all steps, the model generates high-quality images with sufficient diversity. Taking the calculation time into consideration, we simply select 50 steps in our experiments." }, { "figure_ref": [], "heading": "Parameter Analysis on ImageIDC", "publication_ref": [], "table_ref": [], "text": "We extensively demonstrate the parameter analysis on Im-ageIDC to illustrate the robustness of the hyper-parameters. Fig. 9a shows the performance curve along the training epochs. As the training process starts, the representativeness constraint quickly improves the accuracy of small IPCs. Further training enhances the diversity, where the performance on large and small IPCs shows different trends. 
Generally, the generated images achieve the best performance at the 8th epoch, which is consistent with the Im-ageWoof experiments.\nCompared with the results on ImageWoof, further enlarging the representativeness weight λ r improves the performance on small IPCs, as illustrated in Fig. 9b. In comparison, increasing diversity causes a drastic performance drop in Fig. 9b. Although the default settings remain relatively better choices, the balance point between representa- tiveness and diversity is worthy of further exploration. The memory size N M merely has a mild influence on the performance, which aligns with that of ImageWoof." }, { "figure_ref": [], "heading": "Extension to Dataset Expansion", "publication_ref": [], "table_ref": [], "text": "In addition to the standard dataset distillation task, where a small surrogate dataset is generated to replace the original one, we also evaluate the capability of the generated images as an expanded dataset. We add the generated 100-IPC surrogate dataset to the original ImageWoof (approximately 1,300 images per class) and conduct the validation in Tab. 9. As can be observed, although the extra images only take up" } ]
Dataset distillation reduces the storage and computational cost of training a network by generating a small surrogate dataset that encapsulates rich information of the original large-scale one. However, previous distillation methods heavily rely on sample-wise iterative optimization schemes. As the images-per-class (IPC) setting or image resolution grows larger, the necessary computation demands overwhelming time and resources. In this work, we intend to incorporate generative diffusion techniques for computing the surrogate dataset. Observing that the key factors for constructing an effective surrogate dataset are representativeness and diversity, we design additional minimax criteria in the generative training to enhance these facets for the generated images of diffusion models. We present a theoretical model of the process as hierarchical diffusion control, demonstrating the flexibility of the diffusion process to target these criteria without jeopardizing the faithfulness of the samples to the desired distribution. The proposed method achieves state-of-the-art validation performance while demanding far fewer computational resources. Under the 100-IPC setting on ImageWoof, our method requires less than one-twentieth the distillation time of previous methods, yet yields even better performance.
Efficient Dataset Distillation via Minimax Diffusion
[ { "figure_caption": "Figure 1 .1Figure1. The validation accuracy and distillation time of different methods on ImageWoof[15], with a number following each method denoting the Image-Per-Class (IPC) setting. Previous methods are restricted by the heavier running time and memory consumption as IPC grows larger. In comparison, our proposed method notably reduces the demanding computational resources and also achieves state-of-the-art validation performance.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "1 arXivFigure 2 .12Figure 2. Sample images distilled by the pixel-level sample-wise optimization method DM [56] on ImageWoof. As the parameter space increases along with the Image-Per-Class (IPC) setting, with the same initialization, the appearance disparity between original and distilled images is smaller.", "figure_data": "", "figure_id": "fig_1", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. The feature distribution comparison of different image generation methods with the original set. The validation performance of each surrogate set is listed in the upper-right corner.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. With the help of the minimax diffusion, the proposed method significantly enhances the representativeness and diversity of the generated images. Thereby it consistently provides superior performance compared with random selection and baseline diffusion models by a large margin across different IPC settings.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Visualization of random original images, images generated by baseline diffusion models (DiT [33] and Difffit [50]) and our proposed method. For each column, the generated images are based on the same random seed. Comparatively, our method significantly enhances the coverage of original data distribution and the diversity of the surrogate dataset.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Hyper-parameter analysis on (a) the training epochs; (b) the representativeness weight λr; (c) the diversity weight λ d ; (d) the memory size NM . The results are obtained with ResNetAP-10 on ImageWoof. The dashed line indicates the value adopted in this work.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. The training pipeline of the proposed minimax diffusion fine-tuning. The DiT blocks predict the added noise and original embeddings (dark-blue crossings). Then the parameters are updated with the simple diffusion objective and the minimax objectives. The minimax objectives (the right part) enforce the predicted embedding to be close to the farthest real sample and be far away from the closest predicted embedding of adjacent iterations.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Visualization of images generated by the same model with different denoising steps. For each column, the generated images are based on the same random seed.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "10. 
1 .1Experiments to ImageNet-100", "figure_data": "", "figure_id": "fig_9", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. Comparison between samples selected by Herding (left) and generated by the proposed minimax diffusion method (right) for ImageNet-100 classes 0-9. The class names are marked at the left of each row.", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 .12Figure 12. Comparison between samples selected by Herding (left) and generated by the proposed minimax diffusion method (right) for ImageNet-100 classes 10-19. The class names are marked at the left of each row.", "figure_data": "", "figure_id": "fig_11", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 .13Figure 13. Comparison between samples selected by Herding (left) and generated by the proposed minimax diffusion method (right) for ImageNet-100 classes 20-29. The class names are marked at the left of each row.", "figure_data": "", "figure_id": "fig_12", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 .14Figure 14. Comparison between samples selected by Herding (left) and generated by the proposed minimax diffusion method (right) for ImageNet-100 classes 30-39. The class names are marked at the left of each row.", "figure_data": "", "figure_id": "fig_13", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 .15Figure 15. Comparison between samples selected by Herding (left) and generated by the proposed minimax diffusion method (right) for ImageNet-100 classes 40-49. The class names are marked at the left of each row.", "figure_data": "", "figure_id": "fig_14", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 .16Figure 16. Comparison between samples selected by Herding (left) and generated by the proposed minimax diffusion method (right) for ImageNet-100 classes 50-59. The class names are marked at the left of each row.", "figure_data": "", "figure_id": "fig_15", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 17 .17Figure 17. Comparison between samples selected by Herding (left) and generated by the proposed minimax diffusion method (right) for ImageNet-100 classes 60-69. The class names are marked at the left of each row.", "figure_data": "", "figure_id": "fig_16", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 18 .18Figure 18. Comparison between samples selected by Herding (left) and generated by the proposed minimax diffusion method (right) for ImageNet-100 classes 70-79. The class names are marked at the left of each row.", "figure_data": "", "figure_id": "fig_17", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure 19 .19Figure 19. Comparison between samples selected by Herding (left) and generated by the proposed minimax diffusion method (right) for ImageNet-100 classes 80-89. The class names are marked at the left of each row.", "figure_data": "", "figure_id": "fig_18", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Figure 20 .20Figure 20. Comparison between samples selected by Herding (left) and generated by the proposed minimax diffusion method (right) for ImageNet-100 classes 90-99. 
The class names are marked at the left of each row.", "figure_data": "", "figure_id": "fig_19", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "Minimax Diffusion Fine-tuning Input: initialized model parameter θ, original dataset T = {(x, y)}, encoder E, class encoder Ec, time step t, variance schedule ᾱt, real embedding memory M, predicted embedding memory D", "figure_data": "Output: optimized model parameter θ *for each step doObtain the original embedding: z0 = E(x)Obtain the class embedding: c = Ec(y)Sample random noise: ϵ ∼ N (0, I)Add noise to the embedding: zt = √ ᾱtz0 + √ 1 -ᾱtϵPredict the noise ϵ θ (zt, c) and recovered embeddingẑθ (zt, c) = zt -ϵ θ (zt, c)Update the model parameter with Eq. (5)Enqueue the real embedding: Mr ← z0Enqueue the predicted embedding: M d ← ẑθ (zt, c)end", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance comparison with pre-trained diffusion models and other state-of-the-art methods on ImageWoof. All the results are reproduced by us on the 256×256 resolution. The missing results are due to out-of-memory. The best results are marked as bold.", "figure_data": "IPC (Ratio) Test ModelRandom K-Center [37] Herding [48] DiT [33] DM [56] IDC-1 [21] GLaD [4]OursFullConvNet-624.3±1.119.4±0.926.7±0.534.2±1.1 26.9±1.2 33.3±1.133.8±0.9 37.0±1.0 86.4±0.210 (0.8%)ResNetAP-10 29.4±0.822.1±0.132.0±0.334.7±0.5 30.3±1.2 39.1±0.532.9±0.9 39.2±1.3 87.5±0.5ResNet-1827.7±0.921.1±0.430.2±1.234.7±0.4 33.4±0.7 37.3±0.231.7±0.8 37.6±0.9 89.3±1.2ConvNet-629.1±0.721.5±0.829.5±0.336.1±0.8 29.9±1.0 35.5±0.8-37.6±0.2 86.4±0.220 (1.6%)ResNetAP-10 32.7±0.425.1±0.734.9±0.141.1±0.8 35.2±0.6 43.4±0.3-45.8±0.5 87.5±0.5ResNet-1829.7±0.523.6±0.332.2±0.640.5±0.5 29.8±1.7 38.6±0.2-42.5±0.6 89.3±1.2ConvNet-641.3±0.636.5±1.040.3±0.746.5±0.8 44.4±1.0 43.9±1.2-53.9±0.6 86.4±0.250 (3.8%)ResNetAP-10 47.2±1.340.6±0.449.1±0.749.3±0.2 47.1±1.1 48.3±1.0-56.3±1.0 87.5±0.5ResNet-1847.9±1.839.6±1.048.3±1.250.1±0.5 46.2±0.6 48.3±0.8-57.1±0.6 89.3±1.2ConvNet-646.3±0.638.6±0.746.2±0.650.1±1.2 47.5±0.8 48.9±0.7-55.7±0.9 86.4±0.270 (5.4%)ResNetAP-10 50.8±0.645.9±1.553.4±1.454.3±0.9 51.7±0.8 52.8±1.8-58.3±0.2 87.5±0.5ResNet-1852.1±1.044.6±1.149.7±0.851.5±1.0 51.9±0.8 51.1±1.7-58.8±0.7 89.3±1.2ConvNet-652.2±0.445.1±0.554.4±1.153.4±0.3 55.0±1.3 53.2±0.9-61.1±0.7 86.4±0.2100 (7.7%)ResNetAP-10 59.4±1.054.8±0.261.7±0.958.3±0.8 56.4±0.8 56.1±0.9-64.5±0.2 87.5±0.5ResNet-1861.5±1.350.4±0.459.3±0.758.9±1.3 60.2±1.0 58.3±1.2-65.7±0.4 89.3±1.2", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The Maximum Mean Discrepancy (MMD) between the extracted features of surrogate dataset and the original one.", "figure_data": "IPC DiT [33] Difffit [50] DM [56] IDC-1 [21] Ours505.45.44.86.74.01005.55.34.06.44.3and a scheduler decaying the learning rate at 2/3 and 5/6of the whole training iterations. The top-1 accuracy on theoriginal testing set is reported to illustrate the performance.", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance comparison with pre-trained diffusion models and state-of-the-art methods on more ImageNet subsets. The results are obtained on ResNet-10 with average pooling. 
The best results are marked as bold.", "figure_data": "IPCRandomDiT [33]DM [56]OursNette10 20 5054.2±1.6 63.5±0.5 76.1±1.159.1±0.7 64.8±1.2 73.3±0.960.8±0.6 66.5±1.1 76.2±0.462.0±0.2 66.8±0.4 76.6±0.2IDC10 20 5048.1±0.8 52.5±0.9 68.1±0.754.1±0.4 58.9±0.2 64.3±0.652.8±0.5 58.5±0.4 69.1±0.853.1±0.2 59.0±0.4 69.6±0.2", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Performance comparison on ImageNet-1K.", "figure_data": "IPC SRe 2 L [53] RDED [41]DiTOurs1021.3±0.642.0±0.139.6±0.4 44.3±0.55046.8±0.256.5±0.152.9±0.6 58.6±0.365,PDJH:RRI9DOLGDWLRQ$FFXUDF\\30 35 40 45 50 55 60'LIIILW 'L7 5DQGRP102050 ,3&70 100", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The", "figure_data": "LrLr L d w\\mL d w\\m 10-IPC 50-IPC 10-IPC 50-IPC ImageWoof ImageIDC----35.6±0.9 51.0±0.9 53.5±0.2 66.3±0.2✓---34.4±1.1 47.1±0.5 49.6±0.7 60.2±1.2-✓--37.4±0.4 49.5±1.0 54.5±1.2 65.0±0.8--✓-35.7±0.8 48.3±0.6 51.5±0.6 64.8±0.8---✓ 38.7±0.9 54.9±0.7 52.2±0.6 68.4±0.7✓-✓-38.3±0.5 54.9±0.4 53.3±0.5 66.8±0.5-✓-✓ 39.2±1.3 56.3±1.0 53.1±0.2 69.6±0.2", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": ", feature distribution [35, 45, 56, 58], predicted logits [46] or training trajectories [3, 11, 49] with original images.", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The training epoch number on different IPC settings for distilled dataset validation.", "figure_data": "", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Performance comparison on ImageNet-100. The best results are marked as bold.", "figure_data": "", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "The influence of diffusion denoising step number on the generation time of each image and the corresponding validation performance. Performance evaluated with ResNet-10 on Image-Woof. The best results are marked as bold.", "figure_data": "Denoising Step50100250Time (s)0.81.63.21039.2 ±1.3 35.7 ±0.7 39.6 ±0.9IPC20 50 7045.8 ±0.5 44.5 ±0.6 43.7 ±0.7 56.3 ±1.0 58.4 ±0.5 55.8 ±0.5 58.3 ±0.2 59.6 ±1.1 58.9 ±1.4100 64.5 ±0.2 63.3 ±0.7 62.8 ±0.6", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "The averaged generation quality evaluation of 10 classes each with 100 images in ImageWoof.", "figure_data": "Method FID Precision (%) Recall (%) Coverage (%)DM [56] 208.622.123.85.8DiT [33] 81.492.838.924.1DiT+Lr 85.493.238.124.6DiT+L d 81.190.446.828.3Ours full 81.592.445.328.6", "figure_id": "tab_10", "figure_label": "10", "figure_type": "table" } ]
Jianyang Gu; Saeed Vahidian; Vyacheslav Kungurtsev; Haonan Wang; Wei Jiang; Yang You; Yiran Chen
[ { "authors": "Shekoofeh Azizi; Simon Kornblith; Chitwan Saharia; Mohammad Norouzi; David J Fleet", "journal": "", "ref_id": "b0", "title": "Synthetic data from diffusion models improves imagenet classification", "year": "2023" }, { "authors": "Victor Besnier; Himalaya Jain; Andrei Bursuc; Matthieu Cord; Patrick Pérez", "journal": "", "ref_id": "b1", "title": "This Dataset Does Not Exist: Training Models from Generated Images", "year": "2020" }, { "authors": "George Cazenavette; Tongzhou Wang; Antonio Torralba; Alexei A Efros; Jun-Yan Zhu", "journal": "", "ref_id": "b2", "title": "Dataset distillation by matching training trajectories", "year": "2022" }, { "authors": "George Cazenavette; Tongzhou Wang; Antonio Torralba; Alexei A Efros; Jun-Yan Zhu", "journal": "", "ref_id": "b3", "title": "Generalizing dataset distillation via deep generative prior", "year": "2023" }, { "authors": "Justin Cui; Ruochen Wang; Si Si; Cho-Jui Hsieh", "journal": "NeurIPS", "ref_id": "b4", "title": "Dc-bench: Dataset condensation benchmark", "year": "2022" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "", "ref_id": "b5", "title": "ImageNet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Zhiwei Deng; Olga Russakovsky", "journal": "NeurIPS", "ref_id": "b6", "title": "Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks", "year": "2022" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "NeurIPS", "ref_id": "b7", "title": "Diffusion Models Beat GANs on Image Synthesis", "year": "2021" }, { "authors": " Thinh T Doan", "journal": "IEEE Transactions on Automatic Control", "ref_id": "b8", "title": "Nonlinear two-time-scale stochastic approximation convergence and finite-time performance", "year": "2022" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "ICLR", "ref_id": "b9", "title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", "year": "2022" }, { "authors": "Jiawei Du; Yidi Jiang; Y F Vincent; Joey Tianyi Tan; Haizhou Zhou; Li", "journal": "", "ref_id": "b10", "title": "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation", "year": "2023" }, { "authors": "Lisa Dunlap; Alyssa Umino; Han Zhang; Jiezhi Yang; Joseph E Gonzalez; Trevor Darrell", "journal": "", "ref_id": "b11", "title": "Diversify your vision datasets with automatic diffusion-based augmentation", "year": "2023" }, { "authors": "Gabriele Eichfelder", "journal": "Computational Optimization and Applications", "ref_id": "b12", "title": "Scalarizations for adaptively solving multi-objective optimization problems", "year": "2009" }, { "authors": "Ronen Eldan; James R Lee", "journal": "Duke Mathematical Journal", "ref_id": "b13", "title": "Regularization under diffusion and anticoncentration of the information content", "year": "2018" }, { "authors": " Fastai", "journal": "", "ref_id": "b14", "title": "Fastai/imagenette: A smaller subset of 10 easily classified classes from imagenet, and a little more french", "year": "" }, { "authors": "Jianyang Gu; Kai Wang; Wei Jiang; Yang You", "journal": "", "ref_id": "b15", "title": "Summarizing stream data for memoryrestricted online continual learning", "year": "2023" }, { "authors": "Swaminathan Gurumurthy; Ravi Kiran Sarvadevabhatla; R Venkatesh; Babu", 
"journal": "", "ref_id": "b16", "title": "DeLiGAN: Generative Adversarial Networks for Diverse and Limited Data", "year": "2017" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b17", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Ruifei He; Shuyang Sun; Xin Yu; Chuhui Xue; Wenqing Zhang; Philip Torr; Song Bai; Xiaojuan Qi", "journal": "ICLR", "ref_id": "b18", "title": "Is Synthetic Data from Generative Models Ready for Image Recognition?", "year": "2022" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "NeurIPS", "ref_id": "b19", "title": "Denoising Diffusion Probabilistic Models", "year": "2020" }, { "authors": "Jang-Hyun Kim; Jinuk Kim; Seong Joon Oh; Sangdoo Yun; Hwanjun Song; Joonhyun Jeong; Jung-Woo Ha; Hyun Oh Song", "journal": "", "ref_id": "b20", "title": "Dataset condensation via efficient synthetic-data parameterization", "year": "2022" }, { "authors": "Diederik Kingma; Tim Salimans; Ben Poole; Jonathan Ho", "journal": "", "ref_id": "b21", "title": "Variational Diffusion Models", "year": "2021" }, { "authors": "Saehyung Lee; Sanghyuk Chun; Sangwon Jung; Sangdoo Yun; Sungroh Yoon", "journal": "", "ref_id": "b22", "title": "Dataset condensation with contrastive signals", "year": "2022" }, { "authors": "Joseph Lehec", "journal": "", "ref_id": "b23", "title": "Representation formula for the entropy and functional inequalities", "year": "2013" }, { "authors": "Ping Liu; Xin Yu; Joey Tianyi Zhou", "journal": "ICLR", "ref_id": "b24", "title": "Meta Knowledge Condensation for Federated Learning", "year": "2022" }, { "authors": "Yanqing Liu; Jianyang Gu; Kai Wang; Zheng Zhu; Wei Jiang; Yang You", "journal": "", "ref_id": "b25", "title": "Dream: Efficient dataset distillation by representative matching", "year": "2023" }, { "authors": "Noel Loo; Ramin Hasani; Alexander Amini; Daniela Rus", "journal": "NeurIPS", "ref_id": "b26", "title": "Efficient dataset distillation using random feature approximation", "year": "2022" }, { "authors": "Noel Loo; Ramin Hasani; Mathias Lechner; Daniela Rus", "journal": "", "ref_id": "b27", "title": "Dataset distillation with convexified implicit gradients", "year": "2023" }, { "authors": "Qi Mao; Hsin-Ying Lee; Hung-Yu Tseng; Siwei Ma; Ming-Hsuan Yang", "journal": "", "ref_id": "b28", "title": "Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis", "year": "2019" }, { "authors": "Timothy Nguyen; Zhourong Chen; Jaehoon Lee", "journal": "ICLR", "ref_id": "b29", "title": "Dataset meta-learning from kernel ridge-regression", "year": "2021" }, { "authors": "Timothy Nguyen; Roman Novak; Lechao Xiao; Jaehoon Lee", "journal": "NeurIPS", "ref_id": "b30", "title": "Dataset distillation with infinitely wide convolutional networks", "year": "2021" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal", "journal": "", "ref_id": "b31", "title": "Improved Denoising Diffusion Probabilistic Models", "year": "2021" }, { "authors": "William Peebles; Saining Xie", "journal": "", "ref_id": "b32", "title": "Scalable diffusion models with transformers", "year": "2023" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b33", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Ahmad Sajedi; Samir Khaki; Ehsan Amjadian; Lucy Z Liu; Yuri A Lawryshyn; Konstantinos N Plataniotis", "journal": "", 
"ref_id": "b34", "title": "DataDAM: Efficient Dataset Distillation with Attention Matching", "year": "2023" }, { "authors": "Mert Bulent Sariyildiz; Karteek Alahari; Diane Larlus; Yannis Kalantidis", "journal": "", "ref_id": "b35", "title": "Fake it Till You Make it: Learning Transferable Representations from Synthetic ImageNet Clones", "year": "2023" }, { "authors": "Ozan Sener; Silvio Savarese", "journal": "", "ref_id": "b36", "title": "Active learning for convolutional neural networks: A core-set approach", "year": "2018" }, { "authors": "Allahkaram Shafiei; Vyacheslav Kungurtsev; Jakub Marecek", "journal": "", "ref_id": "b37", "title": "Trilevel and multilevel optimization using monotone operator theory", "year": "2021" }, { "authors": "Jordan Shipard; Arnold Wiliem; Kien Nguyen Thanh; Wei Xiang; Clinton Fookes", "journal": "", "ref_id": "b38", "title": "Diversity Is Definitely Needed: Improving Model-Agnostic Zero-Shot Classification via Stable Diffusion", "year": "2023" }, { "authors": "Felipe Petroski Such; Aditya Rawal; Joel Lehman; Kenneth Stanley; Jeffrey Clune", "journal": "", "ref_id": "b39", "title": "Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data", "year": "2020" }, { "authors": "Peng Sun; Bei Shi; Daiwei Yu; Tao Lin", "journal": "", "ref_id": "b40", "title": "On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm", "year": "2024" }, { "authors": "Yonglong Tian; Dilip Krishnan; Phillip Isola", "journal": "", "ref_id": "b41", "title": "Contrastive multiview coding", "year": "2020" }, { "authors": "Belinda Tzen; Maxim Raginsky", "journal": "COLT", "ref_id": "b42", "title": "Theoretical guarantees for sampling and inference in generative models with latent diffusions", "year": "2019" }, { "authors": "Saeed Vahidian; Mingyu Wang; Jianyang Gu; Vyacheslav Kungurtsev; Wei Jiang; Yiran Chen", "journal": "", "ref_id": "b43", "title": "Group distributionally robust dataset distillation with risk minimization", "year": "2024" }, { "authors": "Kai Wang; Bo Zhao; Xiangyu Peng; Zheng Zhu; Shuo Yang; Shuo Wang; Guan Huang; Hakan Bilen; Xinchao Wang; Yang You", "journal": "", "ref_id": "b44", "title": "Cafe: Learning to condense dataset by aligning features", "year": "2022" }, { "authors": "Kai Wang; Jianyang Gu; Daquan Zhou; Zheng Zhu; Wei Jiang; Yang You", "journal": "", "ref_id": "b45", "title": "Dim: Distilling dataset into generative model", "year": "2023" }, { "authors": "Tongzhou Wang; Jun-Yan Zhu; Antonio Torralba; Alexei A Efros", "journal": "", "ref_id": "b46", "title": "Dataset distillation", "year": "2018" }, { "authors": "Max Welling", "journal": "", "ref_id": "b47", "title": "Herding dynamical weights to learn", "year": "2009" }, { "authors": "Xindi Wu; Zhiwei Deng; Olga Russakovsky", "journal": "", "ref_id": "b48", "title": "Multimodal dataset distillation for image-text retrieval", "year": "2023" }, { "authors": "Enze Xie; Lewei Yao; Han Shi; Zhili Liu; Daquan Zhou; Zhaoqiang Liu; Jiawei Li; Zhenguo Li", "journal": "", "ref_id": "b49", "title": "Difffit: Unlocking transferability of large diffusion models via simple parameter-efficient fine-tuning", "year": "2023" }, { "authors": "Yuanhao Xiong; Ruochen Wang; Minhao Cheng; Felix Yu; Cho-Jui Hsieh", "journal": "", "ref_id": "b50", "title": "FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning", "year": "2023" }, { "authors": "Dingdong Yang; Seunghoon Hong; Yunseok Jang; Tianchen Zhao; 
Honglak Lee", "journal": "ICLR", "ref_id": "b51", "title": "Diversity-Sensitive Conditional Generative Adversarial Networks", "year": "2018" }, { "authors": "Zeyuan Yin; Eric Xing; Zhiqiang Shen", "journal": "NeurIPS", "ref_id": "b52", "title": "Squeeze, recover and relabel: Dataset condensation at imagenet scale from a new perspective", "year": "2023" }, { "authors": "Bo Zhao; Hakan Bilen", "journal": "", "ref_id": "b53", "title": "Dataset condensation with differentiable siamese augmentation", "year": "2021" }, { "authors": "Bo Zhao; Hakan Bilen", "journal": "", "ref_id": "b54", "title": "Synthesizing informative training samples with gan", "year": "2022" }, { "authors": "Bo Zhao; Hakan Bilen", "journal": "", "ref_id": "b55", "title": "Dataset condensation with distribution matching", "year": "2023" }, { "authors": "Bo Zhao; Konda Reddy Mopuri; Hakan Bilen", "journal": "ICLR", "ref_id": "b56", "title": "Dataset condensation with gradient matching", "year": "2021" }, { "authors": "Ganlong Zhao; Guanbin Li; Yipeng Qin; Yizhou Yu", "journal": "", "ref_id": "b57", "title": "Improved Distribution Matching for Dataset Condensation", "year": "2023" }, { "authors": "Yongchao Zhou; Ehsan Nezhadarya; Jimmy Ba", "journal": "NeurIPS", "ref_id": "b58", "title": "Dataset distillation using neural feature regression", "year": "2022" }, { "authors": "Yongchao Zhou; Hshmat Sahak; Jimmy Ba", "journal": "", "ref_id": "b59", "title": "Training on thin air: Improve image classification with generated data", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 116.21, 248.33, 109.84, 17.15 ], "formula_id": "formula_0", "formula_text": "z t = √ ᾱt z 0 + √ 1 -ᾱt ϵ," }, { "formula_coordinates": [ 3, 111.39, 321.85, 174.97, 12.69 ], "formula_id": "formula_1", "formula_text": "L simple = ||ϵ θ (z t , c) -ϵ|| 2 2 ,(1)" }, { "formula_coordinates": [ 3, 341.18, 363.08, 203.93, 30.44 ], "formula_id": "formula_2", "formula_text": "L r = arg max θ σ ẑθ (z t , c), 1 N B N B i=0 z i ,(2)" }, { "formula_coordinates": [ 3, 344.37, 530.46, 165.23, 15.69 ], "formula_id": "formula_3", "formula_text": "L r = arg max θ min m∈[N M ] σ (ẑ θ (z t , c), z m ) ." }, { "formula_coordinates": [ 4, 90.01, 130.5, 156.45, 15.69 ], "formula_id": "formula_4", "formula_text": "L d = arg min θ max d∈[N D ] σ (ẑ θ (z t , c), z d ) ." }, { "formula_coordinates": [ 4, 107.13, 294.56, 179.24, 9.65 ], "formula_id": "formula_5", "formula_text": "L = L simple + λ r L r + λ d L d ,(5)" }, { "formula_coordinates": [ 4, 55.09, 408.56, 233.25, 51.26 ], "formula_id": "formula_6", "formula_text": "min {θ (i) } i∈[N D ] λ d max i,j=1,..,N D σ ẑ(θ (i) ), ẑ(θ (j) ) + N D i=1 -λrQq,w∼µ σ ẑ(θ (i) ), w + ∥ẑ(θ (i) ) -z (i) 0 ∥ 2 ,(6)" }, { "formula_coordinates": [ 4, 55.09, 649.32, 231.27, 63.61 ], "formula_id": "formula_7", "formula_text": "min {θ (i) } i∈[N D ] max i,j=1,..,N D σ ẑ(θ (i) ), ẑ(θ (j) ) subj. to {θ (i) } ∈ arg max N D i=1 Qq,w∼µ σ ẑ(θ (i) ), w subj. to θ (i) ∈ arg min ∥ẑ(θ) -z (i) 0 ∥ 2 , ∀i ∈ [ND].(7)" }, { "formula_coordinates": [ 4, 380.3, 249.5, 160.94, 9.65 ], "formula_id": "formula_8", "formula_text": "dZ t = V (Z t )dt + dW t (8" }, { "formula_coordinates": [ 4, 541.24, 249.82, 3.87, 8.64 ], "formula_id": "formula_9", "formula_text": ")" }, { "formula_coordinates": [ 4, 313.84, 522.05, 231.27, 83.29 ], "formula_id": "formula_10", "formula_text": "min u(x,t) max i,j=1,..,N D σ Z u,(i) 1 , Z u,(j) 1 subj. to u ∈ arg max N D i=1 1 0 E Z (i) t Qq,w∼µ σ Z (i) t , w ds Z1 ∼ µ, dZ u,(i) t = u(Z u,(i) t , t)dt + dWt, t ∈ [0, 1]; Z0 = z0.(9)" }, { "formula_coordinates": [ 12, 50.11, 343.36, 230.32, 49.46 ], "formula_id": "formula_11", "formula_text": "u * (z, t) = ∇ log Q 1-t (f ) = ∇ log 1 (2π(1-t)) d/2 f (y) exp -1 2(1-t) ∥z -y∥ 2 dy would be such that if V (Z t ) = u * (z, t) in Eq. (" }, { "formula_coordinates": [ 12, 59.13, 413.05, 218.22, 26.29 ], "formula_id": "formula_12", "formula_text": "J u (z, t) := E 1 2 1 t ∥u s ∥ 2 ds -log f (Z u 1 )|Z u t = z ." }, { "formula_coordinates": [ 12, 99.01, 646.12, 150.92, 26.29 ], "formula_id": "formula_13", "formula_text": "J r (z, t) = 1 t Q q,w∼µ [σ (Z t , w)] ds," }, { "formula_coordinates": [ 12, 322.43, 183.78, 221.08, 13.95 ], "formula_id": "formula_14", "formula_text": "dZ u,(i) t = u(Z u,(i) t , t)dt + dW t , t ∈ [0, 1]; Z u,(i) 0 = z 0" }, { "formula_coordinates": [ 12, 350.25, 254.46, 155.05, 18.33 ], "formula_id": "formula_15", "formula_text": "J d (z, 1) = max i,j=1,..,N D σ Z u,(i) 1 , Z u,(j) 1" } ]
2024-02-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b46", "b50", "b10", "b20", "b49", "b45", "b36", "b13", "b49", "b10", "b14" ], "table_ref": [], "text": "Semantic segmentation is one of the fundamental computer vision tasks that aims to parse the semantic categories of each pixel in an image. Traditional semantic segmentation methods [5,30,47] assume that the semantic categories are closed-set and struggle to recognize the unseen semantic category during inference. To this end, recent Here, the speed is reported on a single NVIDIA A6000 GPU. Our proposed SED achieves an optimal trade-off in terms of speed and accuracy compared to existing methods in literature: SAN [51], CAT-Seg [11], OVSeg [28], DeOP [21], SimBaseline [50] and ZegFormer [13].\nworks have explored open-vocabulary semantic segmentation [2, 46,54] that aims to segment the pixels belonging to the arbitrary semantic categories. Recently, vision-language models, such as CLIP [37] and ALIGN [24], learn aligned image-text feature representation from millions of image-text paired data. The pre-trained vision-language models exhibit superior generalization ability to recognize open-vocabulary categories. This motivates a body of research works to explore using vision-language models for open-vocabulary semantic segmentation [13,28]. Initially, research works mainly adopt two-stage framework [14,28,50] to directly adapt vision-language models for open-vocabulary segmentation. Specifically, they first generate class-agnostic mask proposals and then adopt the pre-trained vision-language models to classify these proposals into different categories. However, such a two-stage framework uses two independent networks for mask generation and classification, thereby hampering computational efficiency. Further, it does not fully utilize the contextual information.\nDifferent to the aforementioned two-stage approaches, methods based on the single-stage framework directly extend a single vision-language model for open-vocabulary segmentation. Several methods remove the pooling operation in last layer of image encoder and generate pixel-level feature map for segmentation. For instance, MaskCLIP [57] removes the global pooling at last layer of the CLIP image encoder and uses the value-embeddings and textembeddings to directly predict pixel-level segmentation map. CAT-Seg [11] first generates pixel-level image-text cost map and then refines the cost map with spatial and class aggregation. While these approaches achieve favorable performance compared to their two-stage counterparts, we note their following limitations. First, both MaskCLIP [57] and CAT-Seg employ plain transformer ViT [15] as the backbone which suffers from weak local spatial information and low-resolution input size. To address those issues, CAT-Seg introduces an additional network to provide spatial information. However, this incurs extra computational cost. Second, the computational cost of CAT-Seg significantly increases with the larger number of open-vocabulary classes.\nTo address the aforementioned issues, we propose a simple yet effective encoder-decoder approach, named SED, for open-vocabulary semantic segmentation. Our proposed SED comprises a hierarchical encoder-based cost map generation and a gradual fusion decoder with category early rejection. The hierarchical encoder-based cost map generation employs hierarchical backbone, instead of plain transformer, to predict pixel-level image-text cost map. 
Compared to plain transformer, hierarchical backbone better preserves the spatial information at different levels and has a linear computational complexity with respect to the input size. Our gradual fusion decoder gradually combines the feature maps from different levels of hierarchical backbone and cost map for segmentation prediction. To increase the inference speed, we design a category early rejection scheme in the decoder that effectively predicts existing categories and rejects non-existing categories at the early layer of the decoder. Comprehensive experiments are conducted on multiple open-vocabulary semantic segmentation datasets, revealing the merits of the proposed contributions in terms of accuracy and efficiency. To summarize, we propose a simple yet effective open-vocabulary semantic segmentation approach with the following contributions.\n• We propose an encoder-decoder for open-vocabulary semantic segmentation comprising a hierarchical encoderbased cost map generation and a gradual fusion decoder. • We introduce a category early rejection scheme to reject non-existing categories at the early layer, which aids in markedly increasing the inference speed without any significant degradation in segmentation performance. For instance, it provides 4.7 times acceleration on PC-459. " }, { "figure_ref": [], "heading": "Vision-Language Models", "publication_ref": [ "b5", "b36", "b6", "b50", "b10" ], "table_ref": [], "text": "Vision-language models aims to learn the connection between image representation and text embeddings. Initially, the researchers developed the vision-language models [6,32,42] based on the pre-trained visual and language models, and explored to jointly fine-tune them on different downstream tasks with image-text pairs. In contrast, CLIP [37] In contrast, some methods adopt single-stage framework. LSeg [27] learns pixel-level image features guided by the pre-trained CLIP text embeddings. MaskCLIP [57] removes the self-attention pooling layer to generate pixellevel feature map and employs text-embeddings to predict final segmentation map. SAN [51] introduces a side adapter network along the frozen CLIP model to perform mask prediction and classification. FC-CLIP [53] employs a frozen convolutional CLIP to predict class-agnostic masks and employs mask-pooled features for classification. CAT-Seg [11] generates pixel-level cost map and refines the cost map for segmentation prediction. Our proposed method is inspired by CAT-Seg that fine-tuning image encoder through cost map does not degrade its open-vocabulary ability, but has significant differences: (1) Our SED is a simpler framework without additional backbone, and has a better performance and faster inference speed. (2) Our SED employs hierarchical image encoder to generate cost map and to perform skiplayer fusion, which can significantly improve performance and has linear computational cost with respect to input size.\n(3) In decoder, we introduce a simple large-kernel operation and gradual fusion for feature aggregation, and design a category early rejection strategy for acceleration without sacrificing performance." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we describe our proposed encoder-decoder for open-vocabulary semantic segmentation, named SED. Fig. 
2 shows the overall architecture of our proposed SED, which comprises two main components: a hierarchical encoder-based cost map generation and a gradual fusion decoder with category early rejection. In our hierarchical encoder-based cost map generation, we employ hierarchical image encoder and text encoder to generate pixellevel image-text cost map F cv and hierarchical feature maps F 2 , F 3 , F 4 for the decoder. Our gradual fusion decoder employs feature aggregation module (FAM) and skip-layer fusion module (SFM) to gradually combine pixel-level cost map F cv and hierarchical feature maps F 2 , F 3 , F 4 for generating high-resolution feature map F h . Based on F h , we employ an output layer to predict segmentation maps of different categories. In addition, a category early rejection (CER) strategy is used in the decoder to early reject nonexisting categories for boosting inference speed." }, { "figure_ref": [], "heading": "Hierarchical Encoder-based Cost Map", "publication_ref": [ "b36", "b10", "b10", "b37" ], "table_ref": [], "text": "Hierarchical encoder-based cost map generation (HECG) adopts the vision-language model CLIP [10, 37,39] to generate pixel-level image-text cost map. Specifically, we first employ hierarchical image encoder and a text encoder to respectively extract visual features and text embeddings. Then, we calculate pixel-level cost map between these two features. Existing methods such as MaskCLIP [57] and CAT-Seg [11] adopt the plain transformer as image encoder to generate pixel-level cost map. As discussed earlier, plain transformer suffers from relatively weak local spatial information and has quadratic complexity with respect to the input size. To address those issues, we propose to use hierarchical backbone as image encoder for cost map generation. Hierarchical encoder can better capture local information and has linear complexity with respect to the input size. The cost map generation is described as follow.\nGiven an input image I ∈ R H×W ×3 , we first utilize a hierarchical encoder ConvNeXt [29,39] to extract multiscale feature maps, denoted as F 2 , F 3 , F 4 , F 5 . These feature maps have strides of 4, 8, 16, 32 pixels with respect to the input size. To align the output visual features and text embeddings, an MLP layer is attached at the last feature map F 5 to obtain an aligned visual feature map F v ∈ R Hv×Wv×Dt , where D t is equal to the feature dimension of text embeddings, H v is H/32, and W v is W/32. Given an arbitrary set of category names {T 1 , .., T N }, we use the prompt template strategy [11,20,28] to generate different textual descriptions S(n) ∈ R P about category name T n , such as \"a photo of a {T n }, a photo of many {T n }, ...\". N represents the total number of categories, and P is the number of templates for each category. By fed S(n) to the text encoder, we obtain text embeddings, denoted as E = {E 1 , .., E N } ∈ R N ×P ×Dt . By calculating the cosine similarity [38] between visual feature map F v and text embeddings E, we obtain the pixel-level cost map F cv as\nF cv (i, j, n, p) = F v (i, j) • E(n, p) ∥F v (i, j)∥∥E(n, p)∥ ,(1)\nwhere i, j indicate the 2D spatial position, n represents the index of text embeddings, and p represents the index of templates. Therefore, the initial cost map F cv has the size of H v × W v × N × P . The initial cost map goes through a convolutional layer to generate the input feature map F l1 dec ∈ R Hv×Wv×N ×D of the decoder. For simplicity, we do not show F l1 dec in Fig. 2." 
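To make the tensor shapes in Eq. (1) explicit, a minimal sketch of the cost-map computation is given below. It is an illustrative reimplementation rather than the released code; the function and variable names are assumptions, and the example shapes follow the settings reported in the implementation details (768×768 input, stride-32 features, D_t = 640 for ConvNeXt-B, P = 80 templates).

```python
# Minimal sketch of the pixel-level image-text cost map in Eq. (1).
# `visual_feat` plays the role of F_v and `text_emb` the role of E.
import torch
import torch.nn.functional as F

def build_cost_map(visual_feat, text_emb):
    """visual_feat: (Hv, Wv, Dt) aligned visual features from the hierarchical encoder.
    text_emb:       (N, P, Dt)  text embeddings for N class names and P prompt templates.
    Returns the cost map F_cv of shape (Hv, Wv, N, P)."""
    v = F.normalize(visual_feat, dim=-1)          # unit-norm visual features
    t = F.normalize(text_emb, dim=-1)             # unit-norm text embeddings
    # cosine similarity for every (pixel, class, template) triple
    return torch.einsum('hwd,npd->hwnp', v, t)

# Example shapes for a 768x768 input with stride-32 features and ConvNeXt-B:
# visual_feat: (24, 24, 640), text_emb: (N, 80, 640) -> cost map: (24, 24, N, 80)
```

A convolutional layer then maps this H_v × W_v × N × P tensor to the decoder input F_dec^{l1} with D = 128 channels per class, as described above.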
}, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Gradual Fusion Decoder", "publication_ref": [ "b10", "b10" ], "table_ref": [ "tab_5" ], "text": "Semantic segmentation greatly benefits from highresolution feature maps. However, the cost map F cv generated by encoder has a relatively low resolution and high noise. Therefore, it is not beneficial to generate high-quality segmentation map by directly using cost map for prediction. To address this issue, we propose a gradual fusion decoder (GFD). GFD gradually generates high-resolution feature map F h by cascading two modules, including feature aggregation module (FAM) and skiplayer fusion module (SFM), into multiple layers. FAM aims to model the relationship between local regions and different classes, whereas SFM is designed to enhance the local details of feature maps using shallow features of hierarchical encoder. Feature Aggregation Module: Fig. 3(a) shows the design of the feature aggregation module (FAM) that has spatiallevel and class-level fusion. We first perform spatial-level fusion to model the relationship of local region. Prior works [29,35] have demonstrated that large-kernel convolutional operation is a simple but efficient structure to capture local information. Motivated by this, we perform spatial-level fusion employing large-kernel convolution [29]. Specifically, the input feature map F li dec goes through a depth-wise convolutional layer and an MLP layer. The depth-wise convolutional layer has a 9 × 9 depth-wise convolution and a layer-norm operation, and the MLP layer contains two linear layers and a GeLU layer. In addition, we use a residual connection in both convolutional and MLP layers. Following the spatial-level aggregation, we further apply a linear self-attention operation as in [11,25] along category dimension to perform class-level feature aggregation. The generated feature map by feature aggregation module (FAM) is represented as F f am dec . Skip-layer Fusion Module: The feature map F f am dec is spatially coarser, which lacks local detail information. In contrast, the shallow feature maps F 2 , F 3 , F 4 in hierarchical encoder contains rich detail information. To incorporate these local details for segmentation, we introduce the skip-layer fusion module to gradually combine the lowresolution feature map F f am dec with high-resolution feature maps F 2 , F 3 , F 4 . As shown in Fig. 3(b), we first upsample low-resolution feature map F f am dec by a factor of 2 using the deconvolutional operation. Then, we reduce the channel dimension of the corresponding high-resolution feature map F j , j ∈ 2, 3, 4 by a factor of 16 using the convolutional operation, and repeat the reduced feature map N times to have the same category dimension with F f am dec . Afterwards, we concatenate the upsampled feature map and the repeated feature map together. To fuse more information, we also upsample and concatenate the initial cost map F cv . Finally, we feed the concatenated feature map F cat dec through two convolutional layers to generate the output feature map F l(i+1) dec . As observed in [11], directly back-propagating the gradient to the image encoder degrades the performance of open-vocabulary semantic segmentation. Therefore, we stop gradient back-propagation directly from skip-layer fusion module to the image encoder.\nOur observation (see Table 4) reveals that, compared to plain transformer, hierarchical encoder with skip-layer fu- sion significantly improves the performance. 
This is likely due to that ,the hierarchical encoder is able to provide rich local information for segmentation, and the stopped gradient back-propagation avoids the negative impact on openvocabulary segmentation ability." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Category Early Rejection", "publication_ref": [], "table_ref": [], "text": "The computational cost of gradual fusion decoder is proportional to the number of semantic categories. When the number of categories is very large, the inference time significantly increases. In fact, most images only contain several semantic categories. As a result, majority of the inference time of the decoder is taken to calculate the features of the non-existing categories. To boost the inference speed, we introduce a category early rejection scheme to recognize these existing categories and reject non-existing categories at the early decoder layer. The feature maps corresponding to rejected categories are removed from current decoder layer, and the following decoder layer only considers the reserved categories.\nDuring training, as shown in Fig. 4(a), we add the auxiliary convolutional branch after each layer to respectively predict segmentation maps, which are supervised by ground-truths. To avoid the negative effect on model training, we stop their gradient back-propagation to the decoder.\nDuring inference, we employ a top-k strategy on segmentation maps to predict the existing semantic categories. Specifically, we select the top-k categories with maximum responses for each pixel and generate a union set of categories from all pixels, which is fed to next decoder layer. We observe that k = 8 can ensure that most existing categories is recognized. Fig. 4(b) shows the category early rejection during inference. We first predict segmentation maps from F l1 dec and employ the top-k strategy to select N l1 categories. Then, we remove the feature maps of nonselected categories and generate the output feature map F l1 cer ∈ R Hv×Wv×N l1 ×D . The generated feature map F l1 cer is fed to the decoder layer. Similarly, we generate the feature maps with fewer categories for the following layers. Therefore, most non-existing categories are rejected at early layer, which boosts the inference speed of the decoder. " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b36", "b10" ], "table_ref": [], "text": "We adopt the pre-trained vision-language model CLIP [10, 37,39] as the base model, where the hierarchical backbone ConvNeXt-B or ConvNeXt-L is used as hierarchical image encoder. The feature dimension D t of text embeddings are 640 for ConvNeXt-B and 768 for ConvNeXt-L, the number of category templates P is 80, and the channel number of feature map F l1 dec is 128. We freeze the text encoder and only train the image encoder and gradual fusion decoder. We train our model on 4 NVIDIA A6000 GPUs with the mini-batch of 4 images. The optimizer AdamW is adopted with the initial learning rate of 2×10 -4 and the weight decay of 1×10 -4 . To avoid over-fitting on training set, the [11] and OVSeg [28], our proposed SED method does not use additional backbone or dataset." }, { "figure_ref": [], "heading": "Method mIoU Time", "publication_ref": [ "b49", "b10", "b50" ], "table_ref": [], "text": "SimBaseline [50] 20.5 316 OVSeg [28] 24.8 314 CAT-Seg [11] 27.2 362 SAN [51] 27 We report the results on A-150 with base and large models. 
Here, the inference time is reported on a single NVIDIA A6000 GPU." }, { "figure_ref": [], "heading": "Comparisons With State-of-the-art Methods", "publication_ref": [ "b45", "b10", "b17", "b10", "b50", "b10", "b50", "b17", "b10", "b50", "b10" ], "table_ref": [ "tab_2", "tab_3" ], "text": "Here, we compare our proposed SED with state-of-the-art open-vocabulary semantic segmentation methods. Table 1 presents the results of different methods on all five test sets. It also shows the corresponding vision-language model (VLM), feature backbone, and training dataset. Most methods, except SPNet [46] and ZS3Net [2], are developed based on a VLM. Some methods [11,28] employ additional feature backbones, while some methods [18,28] use additional datasets or annotations. Most existing open-vocabulary semantic segmentation methods are developed on a VLM with the plain transformer ViT, including the two-stage OVSeg [28] and the single-stage CAT-Seg [11] and SAN [51]. In contrast, our proposed SED adopts the hierarchical encoder ConvNeXt. When using a comparable image encoder (ViT-B or ConvNeXt-B), our SED outperforms these methods on all five test sets. On PC-459, our SED outperforms OVSeg [28], CAT-Seg [11], and SAN [51] by 7.6%, 2.0%, and 6.0%. On A-150, our SED outperforms OpenSeg [18], CAT-Seg [11], and SAN [51] by 14.1%, 4.4%, and 4.1%. Moreover, compared to OVSeg and CAT-Seg, our SED does not require an additional feature backbone. Compared to OVSeg and OpenSeg, our SED does not require an additional dataset or annotation.
When using a comparable image encoder (ViT-L or ConvNeXt-L), our SED also achieves favourable performance on all five test sets. For example, on PC-459, our SED outperforms SAN [51], CAT-Seg [11], and FC-CLIP [53] by 5.5%, 2.2%, and 4.4%. Table 2 further shows the accuracy and speed comparison on A-150. Compared to most methods, our proposed SED achieves strong results with both base and large models at a fast speed. We also present a faster version (SED-fast) by downsampling the input size. Compared to SAN, our SED-fast is 1.9% better at a similar speed with the base model, and is 0.9% better and 1.8 times faster with the large model." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_4", "tab_6", "tab_7" ], "text": "Here we perform an ablation study to show the efficacy of our proposed method, using ConvNeXt-B as the image encoder. Impact of integrating different components: Table 3 shows the impact of integrating different components into the baseline. The baseline adopts the original CLIP with the plain transformer ViT and uses the cost map to predict the segmentation map with skip-layer fusion. The baseline obtains mIoU scores of 7.3%, 14.9%, and 23.7% on A-847, PC-459, and A-150. When using the hierarchical encoder to replace the plain transformer, it obtains mIoU scores of 9.9%, 17.2%, and 28.2% on A-847, PC-459, and A-150, outperforming the baseline by 2.6%, 2.3%, and 4.5%. When further integrating the gradual fusion decoder, it obtains mIoU scores of 11.2%, 18.6%, and 31.8% on A-847, PC-459, and A-150, outperforming the baseline by 3.9%, 3.7%, and 8.1%. When further integrating the category early rejection strategy into our method, performance is almost unchanged but the speed is faster (see Table 7).
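As a companion to the category early rejection scheme above, the following is a minimal sketch of its top-k selection at inference time; the tensor shapes and the helper name are illustrative assumptions, not the released code.

```python
# Minimal sketch of category early rejection (CER) at inference time, assuming
# per-layer auxiliary segmentation logits of shape (B, N, H, W) over N candidate
# categories. This is an illustrative reconstruction, not the authors' code.
import torch


def early_reject(logits: torch.Tensor, feat: torch.Tensor, k: int = 8):
    """Keep only categories that appear in the per-pixel top-k of `logits`.

    logits: (B, N, H, W) auxiliary segmentation predictions of the current layer.
    feat:   (B, N, D, H, W) per-category decoder feature maps.
    Returns the reduced feature maps and the indices of the kept categories.
    """
    b, n, h, w = logits.shape
    topk = logits.flatten(2).topk(min(k, n), dim=1).indices   # (B, k, H*W)
    kept = torch.unique(topk)                                  # union over pixels (and batch)
    return feat[:, kept], kept


# Toy usage: with k = 8, only categories seen in some pixel's top-k survive.
logits = torch.randn(1, 150, 48, 48)
feat = torch.randn(1, 150, 16, 48, 48)
feat_kept, kept_ids = early_reject(logits, feat, k=8)
print(feat_kept.shape, kept_ids.numel())
```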
Plain vs hierarchical encoder: As shown in Table 4, the hierarchical encoder outperforms the plain encoder with or without the skip-layer connection, and when using the skip-layer connection, the hierarchical encoder has a larger improvement. Therefore, the different feature maps of the hierarchical encoder provide rich local information for segmentation.
Ablation study on fine-tuning the encoder in HECG: Table 5 presents an ablation study on fine-tuning the hierarchical encoder in HECG. In the top part (a), we show the impact of different fine-tuning strategies. When we freeze all the layers in the encoder, it has the lowest performance on all five test sets. When we fine-tune all the layers in the encoder, it achieves the best results on all test sets. In the bottom part (b), we present the impact of different scale factors λ for the encoder learning rate. With a scale factor of 1×10⁻², it has the best performance. A larger or smaller scale factor degrades the performance to some degree. Therefore, we fine-tune the hierarchical encoder using a scale factor of 1×10⁻². Ablation study on GFD: Table 6 presents an ablation study on different designs in the gradual fusion decoder (GFD). Our gradual fusion decoder contains the feature aggregation module (FAM) and the skip-layer fusion module (SFM). We first perform some experiments on FAM. In (a), we show the impact of different large-kernel sizes. The best results are obtained with a kernel size of 9. We also observe that the large-kernel operation is better and faster than a Swin block: on PC-459, the mIoU improves by 0.4% and the speed is 1.2 times faster. In (b), we present the impact of spatial-level and class-level feature aggregation. Both spatial-level and class-level feature aggregation improve performance on all five test sets, and combining them yields the best results. Afterwards, we perform some experiments on SFM. In (c), we present the impact of fusing different feature maps in SFM. It can be seen that integrating different feature maps significantly improves performance. In (d), we present the impact of gradient back-propagation in SFM. It has better performance without gradient back-propagation. Finally, we show the impact of different decoder layers in (e). Compared to using only one decoder layer, using three decoder layers has the best results, for example, bringing a further improvement on PC-459.
Figure 5. Qualitative results. In the left part, we show some high-quality results, where our method can accurately classify and segment various categories. In the right-top part, we give some failure cases and corresponding ground-truths (GT). In the right-bottom part, we give one case in which our method can segment the cat that is not annotated in ground-truths (GT).
Table 7. Impact of selecting top-k categories in CER. We show both mIoU and inference time (ms). Here, the inference time is reported on a single NVIDIA A6000 GPU.
Ablation study on CER: Table 7 presents the impact of selecting the top-k categories in category early rejection (CER). A small k means that fewer categories are selected and fed to the decoder layers, which accelerates inference but may sacrifice accuracy. For example, on A-847, when k is equal to 1, the speed is 7.1 times faster than using all categories, but the mIoU is 1.0% lower. When k = 8, we observe a slight improvement in performance, with a speed increase of about 4.7 times.
Qualitative results: Fig. 5 presents some qualitative results. The left part shows high-quality segmentation results. Our method is able to accurately segment various categories, such as palm, runway, and sand. The right-top part shows some failure cases.
In the first two rows, our method mistakenly recognizes the water as sea, the earth as rock, and the car as a truck. In the third row, our method mistakenly recognizes the airplane and ignores the windowpane. In addition, the right-bottom part shows that our method successfully segments a cat that is ignored by the PC-59 ground-truth." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose an approach, named SED, for open-vocabulary semantic segmentation. Our SED comprises hierarchical encoder-based cost map generation and a gradual fusion decoder with category early rejection. We first employ the hierarchical encoder to generate a pixel-level image-text cost map.
Based on the generated cost map and the different feature maps in the hierarchical encoder, we employ the gradual fusion decoder to generate a high-resolution feature map for segmentation. To boost speed, we introduce a category early rejection scheme into the decoder to reject non-existing categories early. Experiments on multiple datasets reveal the effectiveness of our method in terms of both accuracy and speed.
Future work: Our model sometimes struggles to recognize near-synonym categories as distinct classes. In the future, we will explore designing a category attention strategy or using a large-scale fine-grained dataset to address this challenge." } ]
Open-vocabulary semantic segmentation strives to distinguish pixels into different semantic groups from an open set of categories. Most existing methods explore utilizing pre-trained vision-language models, in which the key is to adopt the image-level model for the pixel-level segmentation task. In this paper, we propose a simple encoder-decoder, named SED, for open-vocabulary semantic segmentation, which comprises a hierarchical encoder-based cost map generation and a gradual fusion decoder with category early rejection. The hierarchical encoder-based cost map generation employs a hierarchical backbone, instead of a plain transformer, to predict the pixel-level image-text cost map. Compared to the plain transformer, the hierarchical backbone better captures local spatial information and has linear computational complexity with respect to input size. Our gradual fusion decoder employs a top-down structure to combine the cost map and the feature maps of different backbone levels for segmentation. To accelerate inference speed, we introduce a category early rejection scheme in the decoder that rejects many non-existing categories at the early layers of the decoder, resulting in at most 4.7 times acceleration without accuracy degradation. Experiments are performed on multiple open-vocabulary semantic segmentation datasets, which demonstrate the efficacy of our SED method. When using ConvNeXt-B, our SED method achieves an mIoU score of 31.6% on ADE20K with 150 categories at 82 milliseconds (ms) per image on a single A6000.
SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation
[ { "figure_caption": "Figure 1 .1Figure 1. Accuracy (mIoU) and speed (ms) comparison on A-150 and PC-459.Here, the speed is reported on a single NVIDIA A6000 GPU. Our proposed SED achieves an optimal trade-off in terms of speed and accuracy compared to existing methods in literature: SAN[51], CAT-Seg[11], OVSeg[28], DeOP[21], SimBaseline[50] and ZegFormer[13].", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Structure of gradual fusion decoder. The gradual fusion decoder (GFD) first performs feature aggregation (a) in both spatial and class levels, and then employs skip-layer fusion (b) to combine the feature maps from previous decoder layer and hierarchical encoder.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Structure of category early rejection. During training (a), we attach an auxiliary convolution after each decoder layer to predict segmentation maps supervised by ground-truths. During inference (b), we employ top-k strategy to predict existing categories and reject non-existing categories for next decoder layer.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "4. 1 .1Datasets and Evaluation MetricFollowing existing open-vocabulary semantic segmentation methods[11,51], we use the large-scale dataset COCO-Stuff[3] to train the model. In COCO-Stuff dataset, the training set contains about 118k densely-annotated images with 171 different semantic categories. With the trained model on COCO-Stuff, we conduct experiments on multiple widely-used semantic segmentation datasets (ADE20K [56], PASCAL VOC [16], and PASCAL-Context [33]) to demonstrate the effectiveness of the proposed SED and compare it with state-of-the-art methods in literature. ADE20K [56] is a large-scale semantic segmentation dataset. It contains 20k training images and 2k validation images. In open-vocabulary semantic segmentation task, there are two different test sets: A-150 and A-847. The test set A-150 has 150 common categories, while the test set A-847 has 847 categories. PASCAL VOC [16] is one of the earliest datasets for object detection and segmentation. There are about 1.5k training images and 1.5k validation images. The dataset contains 20 different object categories. In open-vocabulary semantic segmentation task, we name it as PAS-20. PASCAL-Context [33] is extended from the original PAS-CAL VOC dataset for semantic segmentation. In openvocabulary semantic segmentation, there are two different test sets: PC-59 and PC-459. The test set PC-59 has 59 categories, while the test set PC-459 has 459 categories. Evaluation metric: Following existing traditional and open-vocabulary semantic segmentation, we adopt mean Intersection over Union (mIoU) as evaluation metric. It is the averaged value of intersection over unions over all classes.", "figure_data": "", "figure_id": "fig_3", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "• Our proposed method, SED, achieves the superior performance on multiple open-vocabulary segmentation datasets. Specifically, the proposed SED provides a good trade-off in terms of segmentation performance and speed (see Fig.1). 
When using ConvNeXt-L, our proposed SED obtains mIoU scores of 35.2% on A-150 and 22.6% on PC-459.", "figure_data": "2. Related Work2.1. Semantic SegmentationTraditional semantic segmentation methods mainly containFCN-based approaches and transformer-based approaches.Initially, the researchers focused on FCN-based approaches.Long et al. [30] proposed one of the earliest fully-convolutional networks that fuses both deep and shallowfeatures for improved segmentation. Afterwards, manyFCN-based variants were proposed. Some methods utilizespatial pyramid network [5, 52] and encoder-decoder struc-", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Overall architecture of our proposed SED. We first employ hierarchical encoder (learnable) and text encoder (frozen) to generate pixel-level image-text cost map. Afterwards, we introduce a gradual fusion decoder to combine different feature maps of hierarchical encoder and cost map. The gradual fusion decoder stacks feature aggregation module (FAM) and skip-layer fusion module (SFM). In addition, we design a category early rejection (CER) in decoder to accelerate inference speed without sacrificing performance.", "figure_data": "////Hierarchical Encoder//FAMSFM//FAMSFM//FAMSFMOutput LayerCERCERCER\"a photo of a2×2×2×{tree}\"Figure 2.collects a large-scale image-text paired data from web-site and learns visual features via language supervision fromscratch. The learned CLIP on large-scale data has a superiorperformance on different zero-shot tasks. Instead of usingcleaned image-text paired data, ALIGN [24] learns visual-language representation from noisy image-text dataset. Toachieve this goal, ALIGN employs a dual-encoder struc-ture with contrastive loss, which achieves a good zero-shot performance on downstream tasks. Recently, Cherti etal. [10] conducted deep analysis on contrastive language-vision learning. Schuhmann et al. [39] built a billion image-text paired dataset for training large-scale vision-languagemodels.2.3. Open-Vocabulary Semantic SegmentationOpen-vocabulary semantic segmentation aims at segment-ing arbitrary categories. Initially, the researchers [2, 46, 54]explored to align visual features with pre-trained text em-beddings via a learned feature mapping. With the successof large-scale vision-language model CLIP [37], the re-searchers started to explore open-vocabulary semantic seg-", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison with state-of-the-art methods. We report the mIoU results on five widely used test sets for open-vocabulary semantic segmentation. Here, the best results are shown in bold and the second-best results are underlined. With comparable VLM model, Our proposed SED method achieves superior performance on all five test sets. Compared to CAT-Seg", "figure_data": "MethodVLMFeature backboneTraining datasetA-847 PC-459 A-150 PC-59 PAS-20SPNet [46]-ResNet-101PASCAL VOC---24.318.3ZS3Net [2]-ResNet-101PASCAL VOC---19.438.3LSeg [27]ViT-B/32ResNet-101PASCAL VOC-15----47.4LSeg+ [18]ALIGNResNet-101COCO-Stuff2.55.213.036.0-Han et al. [22]ViT-B/16ResNet-101COCO Panoptic [26]3.57.118.845.283.2GroupViT [48]ViT-S/16-GCC [40]+YFCC [44]4.34.910.625.950.7ZegFormer [13]ViT-B/16ResNet-101COCO-Stuff-1564.99.116.942.886.2ZegFormer [11]ViT-B/16ResNet-101COCO-Stuff5.610.418.045.589.5SimBaseline [50]ViT-B/16ResNet-101COCO-Stuff7.0-20.547.788.4OpenSeg [18]ALIGNResNet-101COCO Panoptic [26]+LOc. Narr. 
[36]4.47.917.540.1-DeOP [21]ViT-B/16ResNet-101cCOCO-Stuff-1567.19.422.948.891.7PACL [34]ViT-B/16-GCC [40]+YFCC [44]--31.450.172.3OVSeg [28]ViT-B/16ResNet-101cCOCO-Stuff+COCO Caption7.111.024.853.392.6CAT-Seg [11]ViT-B/16ResNet-101COCO-Stuff8.416.627.257.593.7SAN [51]ViT-B/16-COCO-Stuff10.112.627.553.894.0SED (Ours)ConvNeXt-B-COCO-Stuff11.418.631.657.394.4LSeg [27]ViT-B/32ViT-L/16PASCAL VOC-15----52.3OpenSeg [18]ALIGNEff-B7 [43]COCO Panoptic [26]+LOc. Narr. [36]8.111.526.444.8-OVSeg [28]ViT-L/14Swin-BCOCO-Stuff+COCO Caption9.012.429.655.794.5Ding et al. [14]ViT-L/14-COCO Panoptic [26]8.210.023.745.9-ODISE [49]ViT-L/14-COCO Panoptic [26]11.114.529.957.3-HIPIE [45]BERT-B [12]ViT-HCOCO Panoptic [26]--29.059.3-SAN [51]ViT-L/14-COCO-Stuff13.717.133.360.295.5CAT-Seg [11]ViT-L/14Swin-BCOCO-Stuff10.820.431.562.096.6FC-CLIP [53]ConvNeXt-L-COCO Panoptic [26]14.818.234.158.495.4SED (Ours)ConvNeXt-L-COCO-Stuff13.922.635.260.696.1", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison in terms of mIoU and inference time (ms).", "figure_data": "MethodmIoUTimeODISE [49]29.91989CAT-Seg [11]31.5433SAN [51]33.3117.532FC-CLIP [53]34.1285SED (ours)31.682SED (ours)35.298SED-fast (ours)29.432SED-fast (ours)34.264(a) With base model(b) With large model", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Impact of different modules in our SED. We show the results of integrating different modules into the baseline.", "figure_data": "use addi-", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison of plain and hierarchical encoder. We employ ViT-B and ConvNeXt-B as plain encoder and hierarchical encoder, respectively.", "figure_data": "Image Encoder Skip-layer A-847 PC-459 A-150 PC-59 PAS-20Plainw/o with7.3 7.313.5 14.923.0 23.751.5 52.994.1 94.4Hierarchicalw/o with7.9 9.914.3 17.225.7 28.252.0 54.792.7 95.0StrategyA-847 PC-459 A-150 PC-59 PAS-20Freeze All9.415.328.649.777.2(a)Freeze L0-L211.218.330.654.991.5Freeze L0-L110.417.531.457.394.0Freeze L010.617.731.657.294.1Fine-tune All11.218.631.857.794.4Factor λA-847 PC-459 A-150 PC-59 PAS-20(b)0.00511.317.631.656.493.60.0111.218.631.857.794.40.0210.517.731.357.794.7", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study on fine-tuning image encoder in HECG. We show the results of different fine-tuning strategies and different scale factors of encoder learning rates.", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Table 4 compares plain ViT-B or hierarchical ConvNeXt-B used as image encoder. Hierarchical encoder outperforms plain encoder with or without using skip-layer connection. When using skip-layer Ablation study on different designs in GFD. Gradually fusion decoder (GFD) contains feature aggregation module (FAM) and skip-layer fusion module (SFM). We first show the impact of different kernel sizes (a) and spatial-class aggregation (b) in FAM. Then, we give the impact of fusing different feature maps (c) and gradient back-propagation (d) in SFM. 
Finally, we show the impact of different decoder layers (e).", "figure_data": "Kernel SizeA-847 PC-459 A-150 PC-59 PAS-20(a)711.118.031.857.393.9911.218.631.857.794.41110.818.031.857.194.5AggregationA-847 PC-459 A-150 PC-59 PAS-20(b)Spatial-level10.117.628.854.894.4Class-level10.017.430.756.092.7Both11.218.631.857.794.4Feature FusionA-847 PC-459 A-150 PC-59 PAS-20(c)None7.914.325.752.092.7+F 2,3,411.117.932.057.594.2+F 2,3,4 +F cv11.218.631.857.794.4Gradient to F 2,3,4 A-847 PC-459 A-150 PC-59 PAS-20(d)w/o Stop10.618.031.757.393.9with Stop11.218.631.857.794.4Decoder LayerA-847 PC-459 A-150 PC-59 PAS-20(e)110.117.029.855.691.0211.118.331.857.393.8311.218.631.857.794.4", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
Bin Xie; Jiale Cao; Jin Xie; Fahad Shahbaz Khan; Yanwei Pang
[ { "authors": "Vijay Badrinarayanan; Alex Kendall; Roberto Cipolla", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b0", "title": "Segnet: A deep convolutional encoder-decoder architecture for image segmentation", "year": "2017" }, { "authors": "Maxime Vu; Matthieu Cord; Patrick Pérez", "journal": "", "ref_id": "b1", "title": "Zero-shot semantic segmentation", "year": "2019" }, { "authors": "Holger Caesar; Jasper Uijlings; Vittorio Ferrari", "journal": "", "ref_id": "b2", "title": "Cocostuff: Thing and stuff classes in context", "year": "2018" }, { "authors": "Jiale Cao; Yanwei Pang; Xuelong Li", "journal": "", "ref_id": "b3", "title": "Triply supervised decoder networks for joint detection and segmentation", "year": "2019" }, { "authors": "Liang-Chieh Chen; George Papandreou; Iasonas Kokkinos; Kevin Murphy; Alan L Yuille", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b4", "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "year": "2017" }, { "authors": "Yen-Chun Chen; Linjie Li; Licheng Yu; Ahmed El Kholy; Faisal Ahmed; Zhe Gan; Yu Cheng; Jingjing Liu", "journal": "Springer", "ref_id": "b5", "title": "Uniter: Universal image-text representation learning", "year": "2020" }, { "authors": "Zhe Chen; Yuchen Duan; Wenhai Wang; Junjun He; Tong Lu; Jifeng Dai; Yu Qiao", "journal": "", "ref_id": "b6", "title": "Vision transformer adapter for dense predictions", "year": "2022" }, { "authors": "Bowen Cheng; Alex Schwing; Alexander Kirillov", "journal": "", "ref_id": "b7", "title": "Perpixel classification is not all you need for semantic segmentation", "year": "2021" }, { "authors": "Bowen Cheng; Ishan Misra; Alexander G Schwing; Alexander Kirillov; Rohit Girdhar", "journal": "", "ref_id": "b8", "title": "Masked-attention mask transformer for universal image segmentation", "year": "2022" }, { "authors": "Mehdi Cherti; Romain Beaumont; Ross Wightman; Mitchell Wortsman; Gabriel Ilharco; Cade Gordon; Christoph Schuhmann; Ludwig Schmidt; Jenia Jitsev", "journal": "", "ref_id": "b9", "title": "Reproducible scaling laws for contrastive language-image learning", "year": "2023" }, { "authors": "Seokju Cho; Heeseong Shin; Sunghwan Hong; Seungjun An; Seungjun Lee; Anurag Arnab; Paul Hongsuck Seo; Seungryong Kim", "journal": "", "ref_id": "b10", "title": "Cat-seg: Cost aggregation for open-vocabulary semantic segmentation", "year": "2007" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b11", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Jian Ding; Nan Xue; Gui-Song Xia; Dengxin Dai", "journal": "", "ref_id": "b12", "title": "Decoupling zero-shot semantic segmentation", "year": "2022" }, { "authors": "Zheng Ding; Jieke Wang; Zhuowen Tu", "journal": "", "ref_id": "b13", "title": "Openvocabulary panoptic segmentation with maskclip", "year": "2022" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b14", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Mark Everingham; Luc Van Gool; K I Christopher; John Williams; Andrew Winn; Zisserman", "journal": "International Journal of Computer 
Vision", "ref_id": "b15", "title": "The pascal visual object classes (voc) challenge", "year": "2010" }, { "authors": "Jun Fu; Jing Liu; Haijie Tian; Yong Li; Yongjun Bao; Zhiwei Fang; Hanqing Lu", "journal": "", "ref_id": "b16", "title": "Dual attention network for scene segmentation", "year": "2019" }, { "authors": "Golnaz Ghiasi; Xiuye Gu; Yin Cui; Tsung-Yi Lin", "journal": "", "ref_id": "b17", "title": "Scaling open-vocabulary image segmentation with image-level labels", "year": "2022" }, { "authors": "Jiaqi Gu; Hyoukjun Kwon; Dilin Wang; Wei Ye; Meng Li; Yu-Hsin Chen; Liangzhen Lai; Vikas Chandra; David Z Pan", "journal": "", "ref_id": "b18", "title": "Multi-scale high-resolution vision transformer for semantic segmentation", "year": "2022" }, { "authors": "Xiuye Gu; Tsung-Yi Lin; Weicheng Kuo; Yin Cui", "journal": "", "ref_id": "b19", "title": "Open-vocabulary object detection via vision and language knowledge distillation", "year": "2022" }, { "authors": "Cong Han; Yujie Zhong; Dengjie Li; Kai Han; Lin Ma", "journal": "", "ref_id": "b20", "title": "Zero-shot semantic segmentation with decoupled one-pass network", "year": "2023" }, { "authors": "Kunyang Han; Yong Liu; Jun Hao Liew; Henghui Ding; Jiajun Liu; Yitong Wang; Yansong Tang; Yujiu Yang; Jiashi Feng; Yao Zhao", "journal": "", "ref_id": "b21", "title": "Global knowledge calibration for fast open-vocabulary segmentation", "year": "2023" }, { "authors": "Zilong Huang; Xinggang Wang; Lichao Huang; Chang Huang; Yunchao Wei; Wenyu Liu", "journal": "", "ref_id": "b22", "title": "Ccnet: Criss-cross attention for semantic segmentation", "year": "2019" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "", "ref_id": "b23", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "Angelos Katharopoulos; Apoorv Vyas; Nikolaos Pappas; Franc ¸ois; Fleuret ", "journal": "", "ref_id": "b24", "title": "Transformers are rnns: Fast autoregressive transformers with linear attention", "year": "2020" }, { "authors": "Alexander Kirillov; Kaiming He; Ross Girshick; Carsten Rother; Piotr Dollár", "journal": "", "ref_id": "b25", "title": "Panoptic segmentation", "year": "2019" }, { "authors": "Boyi Li; Q Kilian; Serge Weinberger; Vladlen Belongie; Rene Koltun; Ranftl", "journal": "", "ref_id": "b26", "title": "Language-driven semantic segmentation", "year": "2022" }, { "authors": "Feng Liang; Bichen Wu; Xiaoliang Dai; Kunpeng Li; Yinan Zhao; Hang Zhang; Peizhao Zhang; Peter Vajda; Diana Marculescu", "journal": "", "ref_id": "b27", "title": "Open-vocabulary semantic segmentation with mask-adapted clip", "year": "2023" }, { "authors": "Zhuang Liu; Hanzi Mao; Chao-Yuan Wu; Christoph Feichtenhofer; Trevor Darrell; Saining Xie", "journal": "", "ref_id": "b28", "title": "A convnet for the 2020s", "year": "2022" }, { "authors": "Jonathan Long; Evan Shelhamer; Trevor Darrell", "journal": "", "ref_id": "b29", "title": "Fully convolutional networks for semantic segmentation", "year": "2015" }, { "authors": "Chenyang Lu; Daan De Geus; Gijs Dubbelman", "journal": "", "ref_id": "b30", "title": "Contentaware token sharing for efficient semantic segmentation with vision transformers", "year": "2023" }, { "authors": "Jiasen Lu; Dhruv Batra; Devi Parikh; Stefan Lee", "journal": "", "ref_id": "b31", "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language 
tasks", "year": "2019" }, { "authors": "Roozbeh Mottaghi; Xianjie Chen; Xiaobai Liu; Nam-Gyu Cho; Seong-Whan Lee; Sanja Fidler; Raquel Urtasun; Alan Yuille", "journal": "", "ref_id": "b32", "title": "The role of context for object detection and semantic segmentation in the wild", "year": "2014" }, { "authors": "Jishnu Mukhoti; Tsung-Yu Lin; Omid Poursaeed; Rui Wang; Ashish Shah; Philip Hs Torr; Ser-Nam Lim", "journal": "", "ref_id": "b33", "title": "Open vocabulary semantic segmentation with patch aligned contrastive learning", "year": "2023" }, { "authors": "Chao Peng; Xiangyu Zhang; Gang Yu; Guiming Luo; Jian Sun", "journal": "", "ref_id": "b34", "title": "Large kernel matters-improve semantic segmentation by global convolutional network", "year": "2017" }, { "authors": "Jordi Pont-Tuset; Jasper Uijlings; Soravit Changpinyo; Radu Soricut; Vittorio Ferrari", "journal": "", "ref_id": "b35", "title": "Connecting vision and language with localized narratives", "year": "2020" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b36", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Ignacio Rocco; Relja Arandjelovic; Josef Sivic", "journal": "", "ref_id": "b37", "title": "Convolutional neural network architecture for geometric matching", "year": "2017" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "", "ref_id": "b38", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Piyush Sharma; Nan Ding; Sebastian Goodman; Radu Soricut", "journal": "", "ref_id": "b39", "title": "Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning", "year": "2018" }, { "authors": "Robin Strudel; Ricardo Garcia; Ivan Laptev; Cordelia Schmid", "journal": "", "ref_id": "b40", "title": "Segmenter: Transformer for semantic segmentation", "year": "2021" }, { "authors": "Hao Tan; Mohit Bansal", "journal": "", "ref_id": "b41", "title": "Lxmert: Learning crossmodality encoder representations from transformers", "year": "2019" }, { "authors": "Mingxing Tan; Quoc Le", "journal": "", "ref_id": "b42", "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "year": "2019" }, { "authors": "Bart Thomee; David A Shamma; Gerald Friedland; Benjamin Elizalde; Karl Ni; Douglas Poland; Damian Borth; Li-Jia Li", "journal": "Association for Computing Machinery", "ref_id": "b43", "title": "Yfcc100m: The new data in multimedia research", "year": "2016" }, { "authors": "Xudong Wang; Shufan Li; Konstantinos Kallidromitis; Yusuke Kato; Kazuki Kozuka; Trevor Darrell", "journal": "", "ref_id": "b44", "title": "Hierarchical open-vocabulary universal image segmentation", "year": "" }, { "authors": "Yongqin Xian; Subhabrata Choudhury; Yang He; Bernt Schiele; Zeynep Akata", "journal": "", "ref_id": "b45", "title": "Semantic projection network for zero-and few-label semantic segmentation", "year": "2019" }, { "authors": "Enze Xie; Wenhai Wang; Zhiding Yu; Anima Anandkumar; Jose M Alvarez; Ping Luo", "journal": "", "ref_id": "b46", "title": "Segformer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "Jiarui Xu; 
Shalini De Mello; Sifei Liu; Wonmin Byeon; Thomas Breuel; Jan Kautz; Xiaolong Wang", "journal": "", "ref_id": "b47", "title": "Groupvit: Semantic segmentation emerges from text supervision", "year": "2022" }, { "authors": "Jiarui Xu; Sifei Liu; Arash Vahdat; Wonmin Byeon; Xiaolong Wang; Shalini De Mello", "journal": "", "ref_id": "b48", "title": "Open-vocabulary panoptic segmentation with text-to-image diffusion models", "year": "2023" }, { "authors": "Mengde Xu; Zheng Zhang; Fangyun Wei; Yutong Lin; Yue Cao; Han Hu; Xiang Bai", "journal": "", "ref_id": "b49", "title": "baseline for openvocabulary semantic segmentation with pre-trained visionlanguage model", "year": "2022" }, { "authors": "Mengde Xu; Zheng Zhang; Fangyun Wei; Han Hu; Xiang Bai", "journal": "", "ref_id": "b50", "title": "Side adapter network for open-vocabulary semantic segmentation", "year": "2023" }, { "authors": "Maoke Yang; Kun Yu; Chi Zhang; Zhiwei Li; Kuiyuan Yang", "journal": "", "ref_id": "b51", "title": "Denseaspp for semantic segmentation in street scenes", "year": "2018" }, { "authors": "Qihang Yu; Ju He; Xueqing Deng; Xiaohui Shen; Liang-Chieh Chen", "journal": "", "ref_id": "b52", "title": "Convolutions die hard: Open-vocabulary segmentation with single frozen convolutional clip", "year": "2023" }, { "authors": "Hang Zhao; Xavier Puig; Bolei Zhou; Sanja Fidler; Antonio Torralba", "journal": "", "ref_id": "b53", "title": "Open vocabulary scene parsing", "year": "2017" }, { "authors": "Sixiao Zheng; Jiachen Lu; Hengshuang Zhao; Xiatian Zhu; Zekun Luo; Yabiao Wang; Yanwei Fu; Jianfeng Feng; Tao Xiang; Philip Hs Torr", "journal": "", "ref_id": "b54", "title": "Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers", "year": "2021" }, { "authors": "Bolei Zhou; Hang Zhao; Xavier Puig; Tete Xiao; Sanja Fidler; Adela Barriuso; Antonio Torralba", "journal": "International Journal of Computer Vision", "ref_id": "b55", "title": "Semantic understanding of scenes through the ade20k dataset", "year": "2019" }, { "authors": "Chong Zhou; Chen Change Loy; Bo Dai", "journal": "", "ref_id": "b56", "title": "Extract free dense labels from clip", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 89.9, 334.35, 196.46, 23.22 ], "formula_id": "formula_0", "formula_text": "F cv (i, j, n, p) = F v (i, j) • E(n, p) ∥F v (i, j)∥∥E(n, p)∥ ,(1)" } ]
2023-11-27
[ { "figure_ref": [ "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b5", "b0", "b14", "b11", "b11" ], "table_ref": [], "text": "Rival image representation techniques coexist within computer graphics: bitmap images, which consist of pixel matrices, and vector images, depict sequences of artistically drawn shapes. Image Rasterization is particularly well understood and implemented, yet modern-day trends show a rising interest towards image vectorization for its inherent advantages, such as scale adaptability and resolution independence, which are vital for usage in contemporary interfaces [6]. However, vectorization poses numerous challenges, particularly in generating vector images or Scalable Vector Graphics (SVGs) [27] that are both human-readable and retain the semantic relevance of the original raster image.\nContrasting raster images, which are an assembly of ordered pixels, SVGs describe images through a set of parametric shape primitives which enables numerous benefits including smaller file sizes and resolutionindependence [22]. Despite being inherently more compact and flexible, ensuring that the SVG representations retain the semantics of the original image and at the same time are efficiently generable remains a difficult task. Figure 1 provides a visual representation of the fundamental components of an SVG. It illustrates how simple geometric shapes are coded and rendered into a composite image.\nRecent years have seen considerable progress in image vectorization, primarily through two technical advancements: implementing advanced generative models [1,15] and deploying sophisticated differentiable rendering methods [12,16]. However, these developments overlooked key characteristics of SVGs, such as their XML foundation and PNG Image Roof Sun Window Door House <rect x=\"50\" y=\"100\" width=\"100\" height=\"100\" fill=\"brown\"/> <rect x=\"90\" y=\"130\" width=\"20\" height=\"70\" fill=\"blue\"/> <rect x=\"70\" y=\"110\" width=\"20\" height=\"20\" fill=\"lightblue\"/> <rect x=\"110\" y=\"110\" width=\"20\" height=\"20\" fill=\"lightblue\"/> <circle cx=\"250\" cy=\"40\" r=\"30\" fill=\"yellow\"/> <polygon points=\"50,100 100,60 150,100\" fill=\"red\"/> <XML/> <polygon points=\"50,100 100,60 150,100\" fill=\"red\"/> <circle cx=\"250\" cy=\"40\" r=\"30\" fill=\"yellow\"/> <rect x=\"70\" y=\"110\" width=\"20\" height=\"20\" fill=\"lightblue\"/> <rect x=\"110\" y=\"110\" width=\"20\" height=\"20\" fill=\"lightblue\"/> <rect x=\"90\" y=\"130\" width=\"20\" height=\"70\" fill=\"blue\"/> <rect x=\"50\" y=\"100\" width=\"100\" height=\"100\" fill=\"brown\"/> their complexity beyond simple paths. Specifically, these methods typically yield SVGs that are unreadable, as they overly rely on certain strategies like the DiffVG, which generates SVGs with a fixed number of paths where many are unnecessarily obscured by others [12]. Moreover, these images frequently contain curves that overshoot the viewBox attribute's boundaries, resulting in a vectorized image that fails to capture the intended semantic information. These methods also spends quite a long time to generate SVGS for image preprocessing and optimization process. Therefore, a simple but practical method for capturing the essence of image vectorization is sought after in the field.\nIn this paper, we propose a novel approach, S 2 VG 2 , which leverages the capabilities of vision language models to overcome these challenges. 
S 2 VG 2 shows excellent performance in generating clean, understandable SVG representations from complex raster images, while maintaining their meaning from a comprehensible, human perspective. Efficacy of S 2 VG 2 is evaluated rigorously by benchmarking it against three key metrics: pixel and feature level similarity to the original bitmap; simplicity and readability of the resulting image in terms of the number of shapes and the parameters defining them; and the speed at which the vectorized image is generated.\nS 2 VG 2 's ability to encapsulate the semantic structure of intricate raster images into a neatly arranged collection of elementary shape primitives represents a crucial stride in the field of image vectorization aided by vision language models. This work paves the way for new methodologies in creating SVG icons and explores the potential of combining vision and language models for complex graphical tasks, thereby opening new possibilities in computer graphics and image processing fields.\nMoreover, it poses an innovative perspective on vectorization process by incorporating vision language models. S 2 VG 2 unravels a new realm of possibilities for pretrained vision large language models in visual representation and understanding. Our main contributions in this work can be summarized as follows:\n• We propose S 2 VG 2 , a novel method combined with a vision language model for intricate SVG generation, which can effectively retain the semantic coherence from raster images to SVGs and is capable of generating humanreadable SVGs. • S 2 VG 2 is extensively benchmarked and evaluated against performance metrics involving the vision quality of image vectorization, readability for language models and simplicity of the final SVG over previous methods. • We introduce a specialized dataset named SVG-SHAPE, designed for evaluating SVG generation methods. Each image in this dataset is a 384×384 RGB image depicting a 3×3 grid of objects, providing a standardized and challenging testbed for assessing vectorization algorithms." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Scalable Vector Graphics", "publication_ref": [ "b20", "b27" ], "table_ref": [], "text": "Vector graphics provide an alternative representation framework for visual content, delineating images as collections of parameterized shape primitives, such as polygons, circles, and rectangles [21] . This contrasts with conventional raster-based images composed of pixel grids. Notably, vector graphics leverage these primitives' geometric attributes, represented by coordinate sets defining contours and associated color values. This representation is widely supported by web browsers, requiring no specialized software or plugins for rendering. The succinctness and infinite scalability of vector graphics result in adaptability via easy adjustments in stroke or color parameters. The Scalable Vector Graphics format (SVG) takes advantage of this intrinsic characteristic of vector graphics by encoding images as XML-based text files, specifying geometric entities and their pertinent attributes. This XML structure empowers facile manipulation and editing, rendering SVG particularly versatile for web applications and graphic design tasks. In the context of this paper, we harness the potential of vision language models (VLMs) [28] to bridge the gap between pixel-based images and the SVG format. 
This strategic integration of VLMs ensures the fidelity of generated SVGs in terms of readability, simplicity, and preservation of semantic coherence." }, { "figure_ref": [], "heading": "Image Vectorization", "publication_ref": [ "b12", "b29", "b5", "b6", "b11" ], "table_ref": [], "text": "Rasterization and vectorization are dual problems in image processing. Various techniques, including algorithmic and machine learning-based methods [24], have been explored. Algorithmic approaches encompass mesh-based [8, 13,31] and curve-based methods [3,30], each with their own advantages and limitations. Machine learning-based methods, however, offer promising avenues for automated vector im-age generation [6] .\nExisting machine learning-compatible vectorization methods, such as Mang2Vec [23] and DVoTD [7], exhibit limitations. Mang2Vec struggles with color images and often generates SVGs with an excessive number of shapes, impacting both efficiency and semantic clarity. Other methods, like DiffVG [12] and LIVE [16], leverage iterative processes and optimization techniques, offering varying levels of accuracy and efficiency." }, { "figure_ref": [], "heading": "Image Captioning", "publication_ref": [ "b24", "b17", "b16", "b18" ], "table_ref": [], "text": "Image captioning [25] is the task of generating a coherent natural language description for an image, representing a vibrant and actively researched intersection of computer vision and natural language processing in recent years. The central objective of image captioning lies in the development of models capable of learning the intricate mapping between visual and textual domains, with the aim of producing meaningful and precise image descriptions.\nIn contemporary approaches to image captioning, notable efforts have been channeled towards elevating the quality of generated captions. These endeavors encompass the integration of attention mechanisms, reinforcement learning techniques, and the utilization of pretrained vision language models. Attention mechanisms [18] empower models to selectively focus on specific regions within an image during the caption generation process. Reinforcement learning [17], on the other hand, equips models with the capacity to refine their captioning skills by learning from feedback received during training.\nRecent advancements have introduced large language models (LLMs) like GPT-4 [19], which have showcased remarkable multi-modal capabilities. A prominent research focus has been the alignment of image encoders with these pretrained vision language models [9, 14, 32], representing a compelling category within the broader domain of pretrained vision language models (VLMs). These pretrained VLMs have the ability to encapsulate extensive visionlanguage correspondence knowledge, facilitating zero-shot predictions through the matching of embeddings derived from both images and texts." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we present the intricacies of the S 2 VG 2 method, which leverages the combined strengths of vision and language models to enhance the generation of Scalable Vector Graphics (SVG). We explore the architecture that forms the backbone of our approach in Section 3.1, and later detail our strategic training approach designed to fine-tune and refine this architecture for optimal SVG generation in Section 3.2." 
}, { "figure_ref": [], "heading": "Architecture", "publication_ref": [ "b10", "b4" ], "table_ref": [], "text": "The architecture of our S 2 VG 2 system is designed to simplify and enhance the flexibility of SVG generation, employing the capabilities of pre-trained vision-language models. While our method is generally applicable to visionlanguage models, we use the BLIP [11] model in our work for its efficiency and widespread usage, to facilitate experiments and benchmarking. The model integrates a vision transformer [5] as a visual encoder and a pre-trained BERT [4] model with causal attention for language generation, forming a framework that leverages the strengths of both visual perception and language semantics. The visual transformer processes the image with its attention mechanism to efficiently capture the global context, and a BERT model adapted with causal attention is used as the text decoder, which conducts cross-attention to integrate information from the vision encoder, and generate tokens of the SVG code auto-regressively." }, { "figure_ref": [], "heading": "Loss Function", "publication_ref": [], "table_ref": [], "text": "Our primary objective is defined as training a machine learning model that predicts Scalable Vector Graphics (SVG) code from a given raster image. The generated SVG code should accurately restore the original image and concisely represent its semantic information, which is essential for reasoning tasks." }, { "figure_ref": [], "heading": "Training Stage", "publication_ref": [], "table_ref": [], "text": "We leverage the strong ability of the pretrained vision-language model in capturing the joint distribution of image and text, and fine-tune it on our SVG dataset to transfer its knowledge to the image vectorization task. In this stage, we align the model's prediction of the SVG code given the image with the ground truth SVG code.\nLet D denote the dataset of paired images and labels {(x i , y i )} N i=1 , where x i is the raster image and y i is the SVG code. y i is a sequence of l i tokens, y i = (y\n(1) i , y(2)\ni , . . . , y (li) i ). The task can be formulated as estimating the probability of the next token conditional on the raster image and the previous tokens.\nP (y (t) i |y (1) i , . . . , y (t-1) i , x i )(1)\nIn our model, the input image x i is processed by a vision transformer, which divides the image into M patches and encodes them as a sequence of embeddings \nV i = {v (1) i , v (2) i , ..., v (M ) i },\n(t-1) i , x i ) = f θ (y (t) i |v (1) i , ...,v (M ) i , y(1)\ni , . . . , y\n(t-1) i ) (2)\nwhere f θ (•|•) denotes the conditional generative model, and θ is its parameters. Our model is fine-tuned using the cross-entropy loss, averaged over all timesteps and samples in the dataset, as shown in the below equation:\nL(θ) = N i=1 li t=1 -log P (y (t) i |y (1) i , . . . , y (t-1) i , x i ) (3)\nInference Stage For text generation, the model's text decoder operates autoregressively, predicting the probability of all candidate tokens, and choose the one with the highest probability as the next token, to get the predicted SVG code ŷi .\nUsing the general knowledge in the pretrained model and the above fine-tuning process, our model can give good predictions on the shape configuration of the images, leading to decent performance in our experiments. However, the shape parameters predicted are by the model are probably not ideal, because large models are known to be less strong in quantitative reasoning . 
To improve the generation quality, we further conduct a refinement step during the inference stage.
We use regular expressions to parse the SVG code $\hat{y}_i$ into a set of shapes and corresponding parameters, denoted as $S_i = \{(s_i^{(j)}, \phi_i^{(j)})\}_{j=1}^{r_i}$, where $r_i$ is the number of objects in the predicted SVG code, $s_i^{(j)}$ denotes the shape type of the j-th object, chosen from pre-defined categories, and $\phi_i^{(j)}$ is the parameter that specifies its exact shape, size, and layout in the image.
We take $S_i$ as the input to DiffVG for rendering the image. DiffVG performs the rendering of the image as a differentiable process, such that the pixels of the rendered image are differentiable w.r.t. the shape parameters. This results in the rendered image $\hat{x}_i$ and builds a computation graph integrated in auto-differentiation frameworks:
$\hat{x}_i = \mathrm{DiffVG}(S_i). \quad (4)$
After rendering the initial SVG code into an image, we optimize the shape parameters with gradient descent, based on the similarity between this rendered image and the actual test image. The loss function is shown below:
$L(\phi) = \alpha \cdot \|x_i - \hat{x}_i\|_1 + \beta \cdot \|x_i - \hat{x}_i\|_2. \quad (5)$
Here, α and β are weights for the ℓ1 and ℓ2 norms, respectively. This loss is used to refine the parameters of the SVG to better match the original image.
The described method demonstrates the efficacy of combining a vision-language model with DiffVG rendering for SVG generation. This approach not only captures the semantic structure of the input images but also refines the SVG output to ensure high fidelity and semantic coherence." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [], "table_ref": [], "text": "In this section, we provide a detailed description of the experimental datasets utilized to evaluate the performance of our S 2 VG 2 method. We introduce two distinct datasets: SVG-SHAPE and SVG-Transform." }, { "figure_ref": [], "heading": "SVG-SHAPE Dataset", "publication_ref": [], "table_ref": [], "text": "To thoroughly assess the capabilities of our S 2 VG 2 method, we have constructed a novel dataset, named SVG-SHAPE.
The SVG-SHAPE dataset is specifically crafted for visual question answering tasks. It features images where diverse shapes, varying in color and size, are strategically placed on a 3x3 grid. Each image is associated with two types of tasks: one that requires inferring the presence of specific shapes, for example, \"Is there a green circle?\", and another that involves determining relative positions, such as \"Is there a red triangle positioned above a blue shape?\"" }, { "figure_ref": [], "heading": "SVG-Transform Dataset", "publication_ref": [], "table_ref": [], "text": "To enhance the capabilities in SVG generation and manipulation, we present the SVG-Transform dataset. This dataset, showcased in Figure 3b, is designed to challenge and assess SVG generation techniques in a more intricate scenario: performing geometric transformations on shapes. Details of the dataset construction are provided in Algorithm 2.
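As a rough, hedged illustration of the kind of construction procedure that Algorithms 1 and 2 describe, consider the sketch below; the shape vocabulary, colors, grid geometry, and sampling probabilities are assumptions for illustration and do not reproduce the exact dataset.

```python
# Rough illustration of the dataset construction described above: place random
# shapes on a 3x3 grid, emit an SVG string, and (optionally) rasterize it.
# The shape vocabulary, colors, and grid geometry are illustrative assumptions.
import random

SHAPES = ["circle", "rect", "triangle"]
COLORS = ["red", "green", "blue", "yellow"]


def random_svg(width=384, height=384, cell=128):
    elems = []
    for row in range(3):
        for col in range(3):
            if random.random() < 0.5:            # leave some grid cells empty
                continue
            shape, color = random.choice(SHAPES), random.choice(COLORS)
            size = random.randint(20, 50)
            cx, cy = col * cell + cell // 2, row * cell + cell // 2
            if shape == "circle":
                elems.append(f'<circle cx="{cx}" cy="{cy}" r="{size}" fill="{color}"/>')
            elif shape == "rect":
                elems.append(f'<rect x="{cx - size}" y="{cy - size}" '
                             f'width="{2 * size}" height="{2 * size}" fill="{color}"/>')
            else:  # triangle as a polygon
                pts = f"{cx},{cy - size} {cx - size},{cy + size} {cx + size},{cy + size}"
                elems.append(f'<polygon points="{pts}" fill="{color}"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
            + "".join(elems) + "</svg>")


dataset = [random_svg() for _ in range(5)]   # rasterization (e.g., with a tool such as cairosvg) omitted
```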
(Algorithm 2, excerpt: for each object, create an SVG element using the generated shape, color, and size; append the SVG element to the SVG document; add the generated SVG to the dataset D; after the loop, rasterize all the SVGs into RGB images.)" }, { "figure_ref": [], "heading": "Training Settings", "publication_ref": [ "b9", "b1" ], "table_ref": [], "text": "In our experimental setup, we conduct a comparative analysis between our method and established baseline techniques in the realm of SVG generation. Training involves two distinct datasets: the SVG-SHAPE dataset, which includes 50,000 image-SVG pairs, and the more comprehensive SVG-Transform dataset, encompassing 500,000 pairs. These datasets provide a solid foundation for evaluating the efficacy and adaptability of S 2 VG 2 .
The architectural backbone of our model integrates the Vision Transformer (ViT) with BERT parameters adapted from the BLIP model. This model is pre-trained on a diverse array of datasets, including the Visual Genome (VG) [10], the SBU Captions Dataset [20], and the extensive Conceptual Captions 12M dataset [2]. This pre-training equips our model with a nuanced understanding of complex visual-textual relationships.
For the optimization process, we utilize the Adam optimizer, setting the learning rate to 1e-5 and the weight decay coefficient to 0.05. Training is conducted on a computational setup comprising four NVIDIA A40 GPUs, and we maintain a batch size of 16." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_4", "fig_4", "fig_4", "fig_4", "fig_5", "fig_4" ], "heading": "Image Quality", "publication_ref": [ "b8", "b25" ], "table_ref": [ "tab_1" ], "text": "We evaluate the vectorization quality of S 2 VG 2 through both quantitative and qualitative analysis, focusing on the differences between the input targets and the SVG-rendered images.
The baseline for comparison is established using the original input images (Figure 4a). SVGs generated by S 2 VG 2 (Figure 4b) exhibit high fidelity to these originals, accurately preserving shapes and colors. In comparison, images vectorized by the LIVE method (Figure 4c), with a predetermined path number of 10 and 50 iterations per layer (path), display noticeable distortions in shape geometry and color hues. The vectorization results from GPT-4V (Figure 4d) show the most significant deviations in shape accuracy and color representation; it can only reliably identify some relative positional relationships between objects within the image. Additionally, we vectorize images using DiffVG, setting the path number to 50 and iterating 500 times, a configuration chosen to balance the complexity of SVG tokens against each method's ability to render the images accurately. The readable SVG code produced by S 2 VG 2 for both the SVG-Transform and SVG-SHAPE datasets can be seen in Figure 5, illustrating the method's effectiveness in generating interpretable and semantically coherent SVGs.
This visual comparison highlights the superior performance of S 2 VG 2 in preserving the integrity of the original designs during vectorization. It is important to note that the specific settings for LIVE and DiffVG, particularly in terms of path number and iteration count, play a crucial role in this evaluation.
To more precisely quantify the quality of the vectorized images, we employ several image quality metrics for comparison: Learned Perceptual Image Patch Similarity (LPIPS) [29], Structural Similarity Index (SSIM) [26], as well as L1 and L2 norms. These metrics offer a comprehensive evaluation, shedding light on the perceptual and structural similarities, along with the error magnitudes, between the generated images and their original counterparts. The results in Table 1 reveal that S 2 VG 2 surpasses other methods in all these metrics. Notably, it records the lowest score in LPIPS, suggesting a higher perceptual resemblance to the original images. In terms of structural integrity, it achieves the highest SSIM score. Furthermore, its lower values in both L1 and L2 norms indicate a minimal pixellevel error throughout the vectorization process. Visual Question Answering (VQA) is a task that combines image understanding and language generation to answer questions about images. In the context of our work, VQA is employed to assess the readability of the generated SVG. As shown in Figure 4, the process starts with the conversion of raster images into SVG format using S 2 VG 2 . These SVGs, coupled with relevant questions, are then fed into large language models (LLMs) for VQA. The LLM's performance in answering these questions provides insight into the quality of the SVGs in terms of their readability and fidelity to the original images.\nMethod LPIPS ↓ SSIM ↑ L1 ↓ L2 ↓ S 2" }, { "figure_ref": [], "heading": "Readability Evaluation", "publication_ref": [], "table_ref": [], "text": "In our evaluation framework, we leverage GPT-4-32k, a variant of the GPT-4 model designed to handle inputs with extended token lengths up to 32,000 tokens. This adaptation is crucial as it accommodates the lengthy and complex SVGs typically generated by existing vectorization methods, which often exceed the standard token limits of conventional language models. As shown in Table 2, our method demonstrates superior readability performance. Its accuracy (acc) closely matches the Ground Truth (GT), the optimal benchmark in this context. Particularly in identifying the existence of specific shapes (acc1), our method reaches near-perfect accuracy.\nIn tasks involving the assessment of relative shape positioning (acc2), our method significantly outperforms the LIVE and DiffVG methods. This highlights S 2 VG 2 's effectiveness in preserving spatial relationships and contextual integrity of the original images during conversion.\nIn summary, the SVGs produced by S 2 VG 2 exhibit high readability in VQA tasks, attesting to the method's ability to generate SVGs that are visually precise and semantically detailed." }, { "figure_ref": [ "fig_9", "fig_9", "fig_9", "fig_9", "fig_9" ], "heading": "User Study", "publication_ref": [], "table_ref": [], "text": "We conduct a user study to assess the effectiveness of our SVG generation method, S 2 VG 2 , in comparison to other methods like LIVE and DiffVG. The study involves a total of 19 questions, divided into two parts.\nThe first part includes 9 questions focused on evaluating the participants' ability to infer the existence and relative positions of shapes within the SVGs. The objective is to determine how accurately users interpret and understand the spatial arrangement and presence of shapes in the SVGs. 
The second part comprises 10 questions where participants are asked to select the SVG output from each method that they find easiest to understand.\nThe user study, as shown in Figure 7, provides clear evidence of user preferences in SVG readability. A majority of 77.8% could easily interpret SVGs generated by S 2 VG 2 , as per Figure 7a, indicating a strong alignment with human cognitive processing. In stark contrast, Figure 7b shows that for the LIVE method, the majority (61.1%) could not understand the SVGs, and a significant portion (30.6%) answered incorrectly, indicating a considerable gap in the clarity and readability of the SVGs produced by this method. Figure 7c shows that for DiffVG an even larger share of responses (70.8%) indicated a lack of understanding. This could be attributed to the higher complexity and possibly less intuitive vector representations generated by DiffVG. Crucially, Figure 7d demonstrates a compelling preference among participants for S 2 VG 2 when choosing the easiest-to-understand SVG output. A significant 85.8% favored S 2 VG 2 , with the remaining percentages dispersed between the other two methods. This overwhelming preference underscores the superior readability and user-friendly nature of the SVGs generated by S 2 VG 2 .\nThe results from the user study conclusively show that S 2 VG 2 significantly enhances the readability of SVGs. They are more comprehensible compared to the alternatives, likely due to the reduced complexity and smaller file sizes of the SVGs generated by our method." }, { "figure_ref": [], "heading": "Complexities Reflected in SVG File Characteristics", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "To further our understanding of the practical implications of SVG generation methods, we delve into the complexities as reflected in the file characteristics. We consider file size and token count as proxies for the complexity of the SVGs, which also have a direct impact on the interpretability and computational efficiency of these vector images. As depicted in Tables 3 and 4, S 2 VG 2 demonstrates a clear advantage in generating more concise and less complex SVG files. This is evidenced by the significantly smaller mean file sizes and shorter token lengths. Such streamlined files suggest a higher degree of readability and simplified vector representations, which could lead to more efficient computation and easier user interpretation.\nAligned with the outcomes of our user study, the SVGs from S 2 VG 2 were preferred for their clarity and simplicity, underscoring the critical nature of an optimized vectorization process. In essence, the ability to generate SVGs that are both computationally efficient and user-friendly is paramount, particularly when scaling up for broader application in the field of computer graphics and beyond." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "This section delves into the implications of our findings from the S 2 VG 2 approach, highlighting its strengths, acknowledging potential limitations, and identifying opportunities for future research." }, { "figure_ref": [], "heading": "Implications of Findings", "publication_ref": [], "table_ref": [], "text": "The S 2 VG 2 model has shown considerable prowess in generating simplified yet semantically rich SVGs. Its success is partly attributed to the innovative use of a pre-trained Vision Transformer and a BERT model with causal attention mechanisms.
This approach facilitates the production of SVGs that closely resemble the original images, making it potentially valuable in fields like graphic design automation and image editing.\nQuantitatively, the model distinguishes itself in the realm of image vectorization, achieving lower LPIPS scores and higher SSIM values relative to existing methods. These metrics are reflective of the model's adeptness in maintaining visual details and structural integrity, aspects that are vital for the practical application of SVGs in various domains." }, { "figure_ref": [], "heading": "Enhanced Image Understanding via SVG", "publication_ref": [], "table_ref": [], "text": "The S 2 VG 2 framework showcases the immense potential of SVGs as a medium for advanced image comprehension. It transforms raster images into SVGs, encapsulating detailed semantic information in a structured, vectorized format. This structured representation enables discrete manipulation of image components, thereby facilitating various tasks such as image editing, graphic design, and automated visual content analysis. The efficacy of this approach is rooted in the SVG's ability to depict complex images through scalable, editable, and descriptively rich vector paths. This capability distinctly sets SVGs apart from traditional bitmap images, which rely on fixed pixel grids." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "While S 2 VG 2 marks significant progress in SVG generation, it is important to acknowledge its inherent limitations, which could serve as focal points for future enhancements: • Token Limitation: The current implementation of S 2 VG 2 is constrained by a fixed token limit, which restricts its ability to process highly complex images. When dealing with images rich in details and elements, this limitation can lead to an oversimplification of the SVG output and result in a loss of nuanced details or an inability to fully capture the intricacy of the original raster image. • Challenges with Direct Backpropagation: The method encounters difficulties in directly backpropagating visual loss parameters through the network. While we employ differentiable rendering techniques, such as DiffVG, for an indirect approach to optimizing SVG parameters, the lack of a direct backpropagation pathway limits the model's precision. Enhancing this aspect could significantly improve the fidelity of the SVG outputs to their original images, making the vectorization process more accurate and efficient. • Handling Complex Images: One of the notable limitations of S 2 VG 2 is its performance with images that possess high levels of complexity. This includes images with intricate patterns, textures, or a large number of distinct visual elements. The current model may struggle to accurately and effectively vectorize such images, which can be critical for applications requiring detailed graphical representations. These limitations highlight areas for future research and development, underscoring the need for ongoing innovation and refinement in the field of SVG generation." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we introduce S 2 VG 2 , a novel approach for generating Scalable Vector Graphics (SVG) through the integration of vision-language models. S 2 VG 2 skillfully navigates the complexities of image vectorization, creating SVGs that are not only accurate but also preserve the semantic essence of the original images. 
This method marks a notable advance in making SVGs more understandable and manageable, enhancing their interpretability for human users.\nThrough rigorous experiments and comparative analyses, we show that S 2 VG 2 contributes substantially to the field of simple SVG generation and exemplifies the successful integration of vision and language models in tackling complex graphical tasks. Our findings open new pathways for exploration in SVG generation, hinting at exciting possibilities for automating graphic design and enhancing tools for visual reasoning." } ]
In the field of computer graphics, the use of vector graphics, particularly Scalable Vector Graphics (SVG), represents a notable development from traditional pixel-based imagery. SVGs, with their XML-based format, are distinct in their ability to directly and explicitly represent visual elements such as shape, color, and path. This direct representation facilitates a more accurate and logical depiction of graphical elements, enhancing reasoning and interpretability. Recognizing the potential of SVGs, the machine learning community has introduced multiple methods for image vectorization. However, transforming images into SVG format while retaining the relational properties and context of the original scene remains a key challenge. Most vectorization methods yield SVGs that are overly complex and not easily interpretable. In response to this challenge, we introduce our method, Simple-SVG-Generation (S 2 VG 2 ). Our method focuses on producing SVGs that are both accurate and simple, aligning with human readability and understanding. On simple images, we evaluate our method on reasoning tasks together with advanced language models, and the results show a clear improvement over previous SVG generation methods. We also conducted human evaluation surveys on the readability of our generated SVGs, and these results likewise favor our method.
Beyond Pixels: Exploring Human-Readable SVG Generation for Simple Images with Vision Language Models
[ { "figure_caption": "Figure 1 .1Figure 1. Demonstrates an SVG representation of a simplified house, translating basic geometrical shapes into their corresponding SVG code elements and then into a visual structure, highlighting the vectorization process.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Overview of the Proposed S 2 VG 2 Approach.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .Algorithm 1 5 : 6 : 7 :31567Figure 3. Images samples from the SVG-SHAPE and SVG-Transform datasets.", "figure_data": "", "figure_id": "fig_2", "figure_label": "31567", "figure_type": "figure" }, { "figure_caption": "Algorithm 2 5 :25Construction of the SVG-Transform Dataset 1: Initialize an empty dataset D for SVG-Transform 2: for i in Dataset Size do 3: Initialize an empty SVG document svg 4: Generate a random shape (circle, rectangle, or triangle) Generate a random size for the object 6: Generate a random combination transform (translate, scale, skewX, skewY and rotate) 7:", "figure_data": "", "figure_id": "fig_3", "figure_label": "25", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Images from different image vectorization methods.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure5. The top image showcases a shape from the SVG-Transform dataset, with its SVG code generated by S 2 VG 2 , highlighting the human-readable format and precise transformation parameters. The bottom image, from the SVG-SHAPE dataset, presents an array of geometric shapes in vibrant colors. The accompanying SVG code, also produced by S 2 VG 2 , clearly outlines the shapes and their attributes, emphasizing the method's effectiveness in generating interpretable and semantically coherent SVGs.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Evaluation of the SVG Readability.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Table 2 .2Accuracy metrics for two VQA tasks. Acc1 represents the accuracy for identifying the existence of specific shapes within the SVG, while Acc2 relates to the accuracy of determining the relative position of shapes.", "figure_data": "", "figure_id": "fig_7", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "(a) Participant responses for S 2 VG 2 , showing a majority correctly identified the SVG representations. (b) Participant responses for LIVE, indicating a majority found the SVGs difficult to understand. (c) Participant responses for DiffVG,indicating a higher difficulty level to understand compared to LIVE. (d) Overall participant preference for the most understandable SVG output, with a significant leaning towards S 2 VG 2 .", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Images from different image vectorization methods.", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "including a [CLS] token for the global image feature. 
The model's image-grounded text decoder then takes the embeddings of the previous tokens (y_i^{(1)}, . . . , y_i^{(t-1)}) as input and conducts cross attention over the visual tokens to estimate the next SVG token.", "figure_data": "Generating Shape Configuration | Language Modelling Loss | SVG XML | Image Tokens | Vision Transformer | Image Embedding | Language Model | PNG image | Encoder | generated SVG | Pixel Comparison Loss | DiffVG | SVG shape | Finetuning Shape Parameters | Rendered image | parameter", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of Image Quality Metrics for Different Methods", "figure_data": "Method | LPIPS ↓ | SSIM ↑ | L1 ↓ | L2 ↓\nS 2 VG 2 | 0.00395 | 0.99501 | 0.92041 | 0.63227\nDiffVG | 0.06075 | 0.97949 | 8.66592 | 3.39910\nLIVE | 0.01835 | 0.99418 | 15.30974 | 0.79269", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of Mean File Sizes and Standard Deviations", "figure_data": "Method | Mean Size (bytes) | Std Dev (bytes)\nS 2 VG 2 | 390.507 | 112.957\nLIVE | 6776.223 | 148.000\nDiffVG | 28354.357 | 196.729\nMethod | Mean Length (tokens) | Std Dev (tokens)\nS 2 VG 2 | 186.725 | 0.394\nLIVE | 3624.951 | 86.126\nDiffVG | 15736.102 | 131.635", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison of Mean Token Lengths and Standard Deviations", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
Tong Zhang; Haoyang Liu; Peiyan Zhang; Yuxuan Cheng; Haohan Wang
[ { "authors": "Alexandre Carlier; Martin Danelljan; Alexandre Alahi; Radu Timofte", "journal": "", "ref_id": "b0", "title": "Deepsvg: A hierarchical generative network for vector graphics animation", "year": "2020" }, { "authors": "Soravit Changpinyo; Piyush Sharma; Nan Ding; Radu Soricut", "journal": "", "ref_id": "b1", "title": "Conceptual 12m: Pushing web-scale image-text pretraining to recognize long-tail visual concepts", "year": "2021" }, { "authors": "Wen Dai; Tao Luo; Jianbing Shen", "journal": "", "ref_id": "b2", "title": "Automatic image vectorization using superpixels and random walkers", "year": "2013" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b3", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b4", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Maria Dziuba; Ivan Jarsky; Valeria Efimova; Andrey Filchenkov", "journal": "", "ref_id": "b5", "title": "Image vectorization: a review", "year": "2023" }, { "authors": "Oleg Vage Egiazarian; Alexey Voynov; Denis Artemov; Aleksandr Volkhonskiy; Maria Safin; Denis Taktasheva; Evgeny Zorin; Burnaev", "journal": "Springer International Publishing", "ref_id": "b6", "title": "Deep Vectorization of Technical Drawings", "year": "2020" }, { "authors": "Gerben Hettinga; Jose Echevarria; Jiri Kosinka", "journal": "", "ref_id": "b7", "title": "Efficient Image Vectorisation Using Mesh Colours", "year": "2021" }, { "authors": "Wonjae Kim; Bokyung Son; Ildoo Kim", "journal": "", "ref_id": "b8", "title": "Vilt: Visionand-language transformer without convolution or region supervision", "year": "2021" }, { "authors": "Ranjay Krishna; Yuke Zhu; Oliver Groth; Justin Johnson; Kenji Hata; Joshua Kravitz; Stephanie Chen; Yannis Kalantidis; Li-Jia Li; David A Shamma; Michael S Bernstein; Fei-Fei Li", "journal": "", "ref_id": "b9", "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", "year": "2016" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "", "ref_id": "b10", "title": "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Tzu-Mao Li; Michal Lukáč; Gharbi Michaël; Jonathan Ragan-Kelley", "journal": "ACM Trans. Graph. (Proc. 
SIG-GRAPH Asia)", "ref_id": "b11", "title": "Differentiable vector graphics rasterization for editing and learning", "year": "2020" }, { "authors": "Zicheng Liao; Hugues Hoppe; David Forsyth; Yizhou Yu", "journal": "IEEE Transactions on Visualization and Computer Graphics", "ref_id": "b12", "title": "A subdivision-based representation for vector image editing", "year": "2012" }, { "authors": "Haotian Liu; Chunyuan Li; Qingyang Wu; Yong Jae Lee", "journal": "", "ref_id": "b13", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "Gontijo Raphael; David Lopes; Douglas Ha; Jonathon Eck; Shlens", "journal": "", "ref_id": "b14", "title": "A learned representation for scalable vector graphics", "year": "2019" }, { "authors": "Xu Ma; Yuqian Zhou; Xingqian Xu; Bin Sun; Valerii Filev; Nikita Orlov; Yun Fu; Humphrey Shi", "journal": "", "ref_id": "b15", "title": "Towards layerwise image vectorization", "year": "2022" }, { "authors": "Volodymyr Mnih; Koray Kavukcuoglu; David Silver; Alex Graves; Ioannis Antonoglou; Daan Wierstra; Martin Riedmiller", "journal": "", "ref_id": "b16", "title": "Playing atari with deep reinforcement learning", "year": "2013" }, { "authors": "Volodymyr Mnih; Nicolas Heess; Alex Graves; Koray Kavukcuoglu", "journal": "", "ref_id": "b17", "title": "Recurrent models of visual attention", "year": "2014" }, { "authors": " Openai", "journal": "", "ref_id": "b18", "title": "", "year": "2023" }, { "authors": "Vicente Ordonez; Girish Kulkarni; Tamara L Berg", "journal": "", "ref_id": "b19", "title": "Im2text: Describing images using 1 million captioned photographs", "year": "2011" }, { "authors": "Zhong-Ren Peng; Chuanrong Zhang", "journal": "Journal of Geographical Systems", "ref_id": "b20", "title": "The roles of geography markup language (gml), scalable vector graphics (svg), and web feature service (wfs) specifications in the development of internet geographic information systems (gis)", "year": "2004" }, { "authors": "Antoine Quint", "journal": "IEEE MultiMedia", "ref_id": "b21", "title": "Scalable vector graphics", "year": "2003" }, { "authors": "Hao Su; Xuefeng Liu; Jianwei Niu; Jiahe Cui; Ji Wan; Xinghao Wu; Nana Wang", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b22", "title": "Marvel: Raster gray-level manga vectorization via primitive-wise deep reinforcement learning", "year": "2023" }, { "authors": "Xingze Tian; Tobias Gunther", "journal": "IEEE transactions on visualization and computer graphics", "ref_id": "b23", "title": "A survey of smooth vector graphics: Recent advances in representation, creation, rasterization and image vectorization", "year": "2022" }, { "authors": "Oriol Vinyals; Alexander Toshev; Samy Bengio; Dumitru Erhan", "journal": "", "ref_id": "b24", "title": "Show and tell: A neural image caption generator", "year": "2015" }, { "authors": "Zhou Wang; Alan Bovik; Hamid Sheikh; Eero Simoncelli", "journal": "Image Processing, IEEE Transactions on", "ref_id": "b25", "title": "Image quality assessment: From error visibility to structural similarity", "year": "2004" }, { "authors": "", "journal": "World Wide Web Consortium", "ref_id": "b26", "title": "Scalable vector graphics", "year": "" }, { "authors": "Jingyi Zhang; Jiaxing Huang; Sheng Jin; Shijian Lu", "journal": "", "ref_id": "b27", "title": "Vision-language models for vision tasks: A survey", "year": "2023" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b28", "title": 
"The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Shuang Zhao; Frédo Durand; Changxi Zheng", "journal": "IEEE Transactions on Visualization and Computer Graphics", "ref_id": "b29", "title": "Inverse diffusion curves using shape optimization", "year": "2018" }, { "authors": "Hailing Zhou; Jianmin Zheng; Lei Wei", "journal": "IEEE Transactions on Image Processing", "ref_id": "b30", "title": "Representing images using curvilinear feature driven subdivision surfaces", "year": "2014" }, { "authors": "Deyao Zhu; Jun Chen; Xiaoqian Shen; Xiang Li; Mohamed Elhoseiny", "journal": "", "ref_id": "b31", "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 317.62, 534.46, 30.92, 14.07 ], "formula_id": "formula_0", "formula_text": "y_i^{(1)}, y_i^{(2)}, \ldots" }, { "formula_coordinates": [ 3, 371.83, 585.34, 173.28, 14.07 ], "formula_id": "formula_1", "formula_text": "P\big(y_i^{(t)} \mid y_i^{(1)}, \ldots, y_i^{(t-1)}, x_i\big) \quad (1)" }, { "formula_coordinates": [ 3, 308.86, 639.46, 236.25, 24.52 ], "formula_id": "formula_2", "formula_text": "V_i = \{ v_i^{(1)}, v_i^{(2)}, \ldots, v_i^{(M)} \}" }, { "formula_coordinates": [ 4, 106.81, 320.08, 105.51, 32.04 ], "formula_id": "formula_3", "formula_text": "P\big(y_i^{(t)} \mid y_i^{(1)}, \ldots, y_i^{(t-1)}, x_i\big) = f_\theta\big(y_i^{(t)} \mid v_i^{(1)}, \ldots, v_i^{(M)}, y_i^{(1)}, \ldots, y_i^{(t-1)}\big) \quad (2)" }, { "formula_coordinates": [ 4, 241, 331.6, 45.36, 20.52 ], "formula_id": "formula_4", "formula_text": "(2)" }, { "formula_coordinates": [ 4, 62.82, 413.22, 223.54, 30.44 ], "formula_id": "formula_5", "formula_text": "L(\theta) = \sum_{i=1}^{N} \sum_{t=1}^{l_i} -\log P\big(y_i^{(t)} \mid y_i^{(1)}, \ldots, y_i^{(t-1)}, x_i\big) \quad (3)" }, { "formula_coordinates": [ 4, 118.05, 648.06, 53.86, 14.07 ], "formula_id": "formula_6", "formula_text": "\{ (\,\cdot\,_i^{(j)}, \phi_i^{(j)}) \}_{j=1}^{r_i}" }, { "formula_coordinates": [ 4, 392.01, 380.37, 153.1, 9.79 ], "formula_id": "formula_7", "formula_text": "\hat{x}_i = \mathrm{DiffVG}(S_i) \quad (4)" }, { "formula_coordinates": [ 4, 344.06, 451.46, 201.06, 9.79 ], "formula_id": "formula_8", "formula_text": "L(\phi) = \alpha \cdot \lVert x_i - \hat{x}_i \rVert_1 + \beta \cdot \lVert x_i - \hat{x}_i \rVert_2 \quad (5)" }, { "formula_coordinates": [ 6, 60.26, 631.99, 207.37, 21.38 ], "formula_id": "formula_9", "formula_text": "Method | LPIPS ↓ | SSIM ↑ | L1 ↓ | L2 ↓" } ]
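A compact PyTorch rendering of the objectives in Eqs. (3) to (5) above. This is an illustrative sketch rather than the released implementation: the differentiable rasterizer is treated as an opaque callable standing in for DiffVG, and the loss weights and the normalization of the norm terms are placeholder choices.

```python
import torch
import torch.nn.functional as F

def lm_loss(logits, target_tokens):
    """Language-modelling objective (Eq. 3): summed -log P(y_t | y_<t, x) over SVG tokens.
    logits: (batch, seq_len, vocab_size); target_tokens: (batch, seq_len) token ids."""
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           target_tokens.reshape(-1),
                           reduction="sum")

def pixel_loss(x, x_hat, alpha=1.0, beta=1.0):
    """Pixel comparison loss (Eq. 5): weighted L1 plus L2 distance between the target
    raster image x and the image x_hat rendered from the predicted shape parameters."""
    return alpha * torch.norm(x - x_hat, p=1) + beta * torch.norm(x - x_hat, p=2)

def finetune_step(shape_params, render_fn, x, optimizer, alpha=1.0, beta=1.0):
    """One optimization step on the shape parameters through a differentiable renderer
    (DiffVG in the paper; here any differentiable callable), cf. Eq. 4."""
    optimizer.zero_grad()
    x_hat = render_fn(shape_params)   # x_hat = DiffVG(S_i)
    loss = pixel_loss(x, x_hat, alpha, beta)
    loss.backward()
    optimizer.step()
    return loss.item()
```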
10.1126/science.aaa8685
2023-11-28
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b36", "b42", "b56", "b10", "b21", "b67", "b29", "b32", "b59", "b71", "b10", "b18", "b11", "b26", "b25", "b34", "b52", "b60", "b4", "b41", "b35", "b46", "b50", "b68", "b24", "b38", "b49", "b13" ], "table_ref": [], "text": "\"Imagine a world where persuasive content is crafted so masterfully that it becomes nearly indistinguishable from human creation, yet is generated by machines at the click of a button. This groundbreaking study unveils the potential of leveraging large language models (LLMs) to generate compelling messages, and puts it to the ultimate test: can they outperform human-crafted tweets in captivating the minds of their audience?\" (Generated by GPT4 powered ChatGPT).\nRecent technological breakthroughs in neural network modeling have ushered in an era of artificial intelligence (AI), and new AI-based systems, such as OpenAI's ChatGPT, are gaining rapid adoption. Within this context, the term AI generally refers to a field of study that aims to understand and build intelligent machines (Luger, 2005;Mitchell, 2019;Russell and Norvig, 2021). The precise and specific definition of intelligence differs based on the approach taken by the researchers, but a common theme is that machines can exhibit cognitive capacities such as intelligence, language, knowledge, and reasoning, which had traditionally been limited to human brains. AI technologies like ChatGPT, or similar systems (e.g., Google's Bard, Meta's Llama) are driven by large language models (LLMs), a specific kind of transformer-based neural networks trained on massive 1 Corresponding Author. Email: limsue@msu.edu amounts of text. Importantly, these LLMs can not only process and categorize text, but they can also be used to generate text that mimics the flow of natural human language (Bubeck et al., 2023;Hirschberg and Manning, 2015;Wei et al., 2022).\nAs the above content from ChatGPT shows, LLMs have advanced to the point where even with minimum instructions, they can generate high-quality creative and informative content. This has opened ample opportunities for health researchers and practitioners to leverage LLMs to augment their work. For instance, within health communication, researchers have found that messages generated by LLMs were clear and informative, and exhibited argument strength (Karinshak et al., 2023;Lim and Schmälzle, 2023;Schmälzle and Wilcox, 2022;Zhou et al., 2023). As LLMs continue to expand in these capabilities (Bubeck et al., 2023), we can expect to see LLMs being used as tools for generating persuasive health messages. However, the rise of AI-generated content in the public communication environment raises the pressing question of how people react to AI as message creators.\nThough this is a relatively novel area of study, there are two relevant bodies of literature that we can draw from: interdisciplinary research about the general sentiment of hesitancy towards novel technologies and source effects research within communication research. It is well-documented that new technologies are often met with skepticism. Studies suggest a general sentiment of hesitancy (von Eschenbach, 2021) and mild to moderate aversion (Castelo and Ward, 2021;Jussupow et al., 2020) towards AI and computer algorithms more broadly. Also, when told that AI was involved in the creation of communicative content, there was some reporting of preference against or lower evaluation of that content (e.g., Airbnb profile writing; Jakesch et al. 
(2019); email writing; Liu et al. (2022); generated paintings; Ragot et al. (2020); music creation; Shank et al. (2023); translation of written content; Asscher and Glikson (2023)). Within health contexts especially, some studies show that people tend to prefer human practitioners over AIbased technologies like chatbots when receiving consultation about health conditions (Miles et al., 2021), citing lack of personalization and incompetence in addressing individual needs as some of the reasons for hesitancy (Longoni et al., 2019).\nSecond, source effects have been studied extensively in persuasion and communication. For instance, a plethora of literature has examined the influence of various aspects of the source, such as credibility, trustworthiness, and similarity, on people's attitudes and behavior (O'Keefe, 2015;Pornpitakpan, 2004;Wilson and Sherrell, 1993). With the advancement of technology, research also examined source effects in online settings (Ismagilova et al., 2020;Ma and Atkin, 2017). In addition, some of the most well-known theories within communication have examined cognitive mechanisms of source effects (ELM; Petty and Cacioppo (1986); HSM; Chen and Chaiken (1999)). Speaking broadly, the results from these studies show that people's thoughts about the source of the message shape how they evaluate the communication content from the source. Since there's already been evidence that LLMs have the potential to be powerful tools in expanding health communication theory and augmenting health campaign practice, it is thus important to investigate how people's perception of AI influences people's evaluation of health campaign messages. Moreover, it will also be critical to identify potential moderators of such influence.\nThis paper presents two experimental studies that shed light on the influence of source disclosure on the evaluation of prevention messages (see Figure 1). For the first study (study 1), we conducted an experimental study examining how source disclosure influenced people's evaluation of (in terms of effects perception) and preference for (in terms of ranking) prevention messages generated by a LLM compared to humans. Then a follow-up study (study 2) inspected how the influence of source disclosure varied on the basis of people's general attitudes toward AI. The findings from our studies have the potential to augment source effects theory within mediated health communication by highlighting how people's awareness of LLM's role in message generation influences their evaluation of the messages." }, { "figure_ref": [], "heading": "Study 1", "publication_ref": [], "table_ref": [], "text": "The goal of our first study was to examine whether source disclosure influenced people's evaluations of AI-generated messages as well as their preference for AI as the source of health information. We selected vaping prevention as a health " }, { "figure_ref": [], "heading": "Vaping Prevention as Context to", "publication_ref": [ "b65", "b8", "b44", "b2", "b14", "b16", "b9", "b33", "b45", "b64", "b29", "b32" ], "table_ref": [], "text": "Examine the Source Effects of AI The use of e-cigarettes (or vaping) has become a significant public health concern in the last decade, especially because of the high prevalence of e-cigarette use among youth (¡18 years of age) and young adults (18-24 years of age). 
About 20% of high school and 5% of middle school students reported vaping in 2020 (Wang et al., 2021); it was also estimated that about 15% of young adults were using e-cigarettes in 2020 (Boakye et al., 2022). Moreover, much of smoking and vaping-related marketing leverages the power of social media -or its capacity in disseminating information and ideas at a rapid speed through networks of people following one another (Nahon and Hemsley, 2013) -to influence audiences and promote tobacco products (Allem et al., 2017;Clark et al., 2016;Collins et al., 2019). To combat the detrimental effects of vaping, health researchers and professionals have invested significant efforts into developing and testing effective campaign messages (Boynton et al., 2023;Liu and Yang, 2020;Noar et al., 2020;Villanti et al., 2021), leading to guidelines for best practices (e.g., Vaping Prevention Resource, 2023). These efforts could be further augmented by the capabilities of LLMs in generating effective health messages (Karinshak et al., 2023;Lim and Schmälzle, 2023)." }, { "figure_ref": [], "heading": "The Current Study and Hypotheses", "publication_ref": [], "table_ref": [], "text": "The current study examined how human participants respond to vaping prevention messages that were either generated by AI vs. humans by either adding accurate source labels to the messages (source disclosed) or not adding any labels (source not disclosed)." }, { "figure_ref": [], "heading": "Effects Perception Ratings as Measure of Evaluation", "publication_ref": [ "b5", "b5", "b19", "b45", "b55", "b29", "b37", "b66" ], "table_ref": [], "text": "Within health campaigns research, one of the most used message evaluation metrics is perceived message effectiveness (PME). According to Baig et al. (2019), the PME measure tends to cover two major constructs, message perceptions and effects perception. Message perceptions refer to the extent the messages seem credible and understandable, while effects perception refers to how the message promotes self-efficacy and behavioral intention. Baig et al. (2019) developed an effects perception scale that focused on examining the extent the message does what it is intended. Existing research showed that effects perception was highly associated with health campaign outcomes such as risk beliefs, attitudes, and behavioral intentions (Grummon et al., 2022;Noar et al., 2020;Rohde et al., 2021), meanwhile in some cases message perceptions did not have significant associations with these outcomes. Thus, we used effects perception ratings as people's measure of the perceived effectiveness of the messages.\nSince the influence of source disclosure is a relatively new area of research, to our knowledge, only one study specifically examined how source disclosure would impact people's ratings of health campaigns messages at the time of writing this manuscript. Karinshak et al. (2023) conducted a set of three exploratory studies that used GPT3 to generate high-quality vaccination promotion messages. The third study, which manipulated source labels, found that prevention messages generated by GPT3 were rated higher in terms of perceived message effectiveness compared to those written by CDC when none of the messages were labeled. However, messages labeled as AIgenerated were rated lower in terms of argument strength and perceived message effectiveness compared to those labeled as created by CDC or those not labeled at all.\nOur study had a few aspects that differed from Karinshak et al. ( 2023) study. 
For one, our comparison of humangenerated messages were tweets, to take into account that much discussion about vaping occurs via social media platforms such as Twitter (Lyu et al., 2021;Wang et al., 2023). Second, we used effects perception measure specifically (rather than the general perceived message effectiveness) as a measure of message evaluation. Still, as existing literature suggests the existence of negative bias against AI-generated content, we posed the following hypothesis:\nHypothesis 1 (H1): People who know the source of the messages will rate AI-generated messages lower and humangenerated tweets higher than those who did not know the source." }, { "figure_ref": [], "heading": "Ranking as Measure of Preference", "publication_ref": [ "b1", "b0", "b3", "b30", "b51", "b47" ], "table_ref": [], "text": "In addition to effects perception ratings, rankings have also been used in existing research to gather information about pref-erence. Unlike ratings, rankings ask participants to order the messages from the best to the worst, using whatever criteria provided by the researcher and/or determined by the participants (Ali and Ronaldson, 2012). Rankings have been used extensively in the social sciences to gather data about constructs such as values (Abalo et al., 2007;Alwin and Krosnick, 1985), and attribute preferences (Lagerkvist, 2013). Within health communication, ranking measurement was used to examine people's preferences, including preferred health promotion icons (Prasetyo et al., 2021) and factors that influence demand for vaccinations (Ozawa et al., 2017). Though we do not know of any work that examined the influence of source disclosure on people's ranking of AI-generated vs. human-generated messages, we still predict that the negative bias against AIgenerated messages will be exhibited in the ranking of the messages. Thus, we pose the following hypothesis:\nH2: Those who know the source will prefer human-generated tweets vs. AI-generated prevention messages." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "We pre-registered our hypotheses and procedures at as.predicted." }, { "figure_ref": [], "heading": "Participants", "publication_ref": [ "b48", "b8", "b70" ], "table_ref": [], "text": "A total of 151 young adults (18-24 years of age) were recruited from two study pools and either received course credit (University study pool) or $2.80 (Prolific; Palan and Schitter (2018)) as compensation for participating in the study. We specifically selected the young adult age group because of the prevalence of vaping in this age demographic (Boakye et al., 2022). The local review board approved the study. We discarded the data from nine participants who did not complete the study or who completed the study in under five minutes, leaving 142 participants (m age = 20.78, sd age = 1.78]; 59% women) in the final dataset. Power calculations conducted a priori using the WebPower package in R (Zhang and Yuan, 2018) for a mixed ANOVA, with a medium effect size (f = 0.25) and significance level α = .05, showed that a total sample size of around 130 (about 65 per group) was enough to detect significant differences between groups at the power level of 0.8." 
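A rough Python cross-check of the a priori power calculation described above (the original calculation used the WebPower package in R), treating the two-group design as a one-way ANOVA with Cohen's f = 0.25; this is an approximate analogue rather than the authors' script.

```python
from statsmodels.stats.power import FTestAnovaPower

# Cohen's f = 0.25 (medium effect), alpha = .05, power = .80, two groups
analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=0.25, alpha=0.05, power=0.80, k_groups=2)
print(round(n_total))   # roughly 129 participants in total, i.e. about 65 per group
```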
}, { "figure_ref": [], "heading": "Experimental Messages: Human-and AI-generated", "publication_ref": [ "b32", "b57", "b63" ], "table_ref": [], "text": "We relied on previously published procedures to generate messages via a LLM, collect human-generated messages, and select 30 total messages (15 AI, 15 human) for the experiment (Lim and Schmälzle, 2023). For details, see Appendix A. For the sake of relevance and length, we briefly outline the process here.\nTo collect human-generated messages, we scraped vaping prevention tweets with hashtags #dontvape, #novaping, #quitvaping, #stopvaping, #vapingkills, and #vapingprevention using the snscrape package (JustAnotherArchivist, 2021) in Python. After cleaning the tweets, we randomly selected 15 tweets that had been retweeted at least once for the experiment.\nFor AI message generation and selection, we generated 500 total vaping prevention messages using the Bloom LLM, and then randomly selected a subset of 15 messages. Bloom is the largest open-source multilingual language model available (Scao et al., 2022). As mentioned in previous sections, Bloom, like GPT3, is powered by the transformer neural network, the most advanced ANN system currently available (Tunstall et al., 2022). Pre-trained with 1.5 TB of pre-processed text from 45 natural and 12 programming languages, Bloom allows for text generation using prompting (inputting the beginning part of the text and the language model completes the text) and a set of statistical parameters. We chose Bloom because of its free cost, full transparency of the training process and training data, and the ability to use it on a local machine via Jupiter notebooks or Google Colab without a special computing system called graphic processing unit (GPU), often required to run large computational tasks." }, { "figure_ref": [], "heading": "Experimental Procedure and Conditions", "publication_ref": [], "table_ref": [], "text": "The experiment was conducted online via Qualtrics. Once participants consented to the study, the young adult participants were randomly assigned to one of two groups: control and treatment (n control = 72, n treatment = 70). Then the survey asked the participants to rate each message on four perceived message effectiveness items and rank the 30 messages (15 AIgenerated vs. 15 tweets). The order of the two activities was randomized to control for order effects. The participants in the treatment condition read messages with source labels (e.g., \"AI-Generated Message: Nicotine in vapes. . . \", \"Human-Generated Tweet: Nicotine in vapes can. . . \") while those in the control condition were not provided the source labels. The source labels were true -no deception was used. Upon completing the main experiment, participants completed demographic questions and were debriefed about the study's purpose." }, { "figure_ref": [], "heading": "Measures", "publication_ref": [ "b5" ], "table_ref": [], "text": "Study 1 included two main measures. First, we adopted and updated UNC's perceived message effects, otherwise named effects perceptions (EP), scale (Baig et al., 2019) to fit vaping. The measure included the following four survey items: \"This message discourages me from wanting to vape,\" \"This message makes me concerned about the health effects of vaping,\" \"This message makes vaping seem unpleasant to me,\" and \"This message makes vaping seem less appealing to me.\" Participants rated each item on a likert scale from 1 (Strongly disagree) to 5 (Strongly agree). 
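A minimal sketch of the kind of prompting workflow described above for generating messages with Bloom via the Hugging Face transformers library. The checkpoint, prompt stem, and sampling parameters shown here are illustrative placeholders rather than the settings documented in Appendix A; the small bloom-560m checkpoint is used only so the sketch runs without a GPU.

```python
from transformers import pipeline

# The study used the full Bloom model; a small checkpoint keeps this example runnable locally.
generator = pipeline("text-generation", model="bigscience/bloom-560m")

prompt = "Vaping prevention message: Nicotine in vapes"   # illustrative prompt stem
outputs = generator(
    prompt,
    max_new_tokens=50,       # length of the completion
    do_sample=True,          # sample rather than greedy-decode
    top_p=0.9,               # nucleus sampling
    temperature=0.8,         # randomness of the completions
    num_return_sequences=5,  # several candidate messages per prompt
)
messages = [o["generated_text"] for o in outputs]
```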
Second, for the ranking activity, we asked participants to rank the 30 messages from the best (1) to the worst (30) message by dragging each message to its rank. Finally, the participants answered demographic questions including age." }, { "figure_ref": [], "heading": "Data Analysis", "publication_ref": [ "b5", "b12", "b20", "b22", "b23", "b54" ], "table_ref": [], "text": "All analyses were conducted in R. To examine H1, the responses for the four items of the EP scale were averaged into a composite EP score for each participant; the last item about the appeal of vaping was excluded from the analysis to keep consistent with the results from Baig et al. (2019). Then we conducted a mixed ANOVA that examined the influence of source disclosure (disclosed vs. undisclosed) and the message source (AI vs. human) on EP.\nFor the statistical difference in the mean ranks between the groups, we first subtracted the mean ranks for the human messages from the mean ranks of the AI messages (AI -Human). Thus, if the human-generated messages were on average ranked higher than AI-generated messages, then this difference value would be negative, and vice versa. Using the stats package (Chambers et al., 1992), we conducted the Wilcoxon Rank Sum Test, the non-parametric alternative to a two-sample ANOVA. We used the alpha level of α = .05 to test for significance for both mixed ANOVA and Wilcoxon Rank Test.\nIn addition, we conducted a supplementary computational analysis. The purpose of this was to extract and compare various textual features of the AI-generated messages and human-generated tweets, showing that the two groups of messages could be adequately compared. The textual methods we used included semantic analysis, n-gram analysis, topic modeling, sentiment analysis, and assessment of readability metrics. These analyses were carried out using Python and R packages including spacy, textacy, vader, topicmodels, and the sentencetransformers (DeWilde, 2020; Grün and Hornik, 2011;Honnibal and Montani, 2020;Hutto and Gilbert, 2014;Reimers and Gurevych, 2019). For all computational analysis of tweets, we removed the hashtags used to scrape the tweets. We also removed the prompts from the AI-generated messages for all analyses except semantic analysis. See Appendix B for the results of the supplementary analysis." }, { "figure_ref": [], "heading": "Deviation from Pre-registration", "publication_ref": [ "b5" ], "table_ref": [], "text": "While the main ideas from the pre-registration remained the same, we altered some of the details of the pre-registration. First, the pre-registration only included the data collection plan for the University sample. We decided to gather additional data from Prolific to make the results more generalizable beyond the University sample and to increase the sample size. Second, we decided to aggregate only the first three out of the four items for the EP measure to be more consistent with the existing literature (Baig et al., 2019). Finally, for the rank data, we used the Wilcoxon test, which is a two-sample extension of the Kruskal-Wallis test." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_1", "tab_2" ], "text": "First, we present the results from the mixed ANOVA, which tested the influence of source disclosure on message ratings (see Table 1). We find that there was a significant interaction effect between source disclosure and the message source (F(1,140) = 4.73, η 2 = .0018, p = .031). 
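A rough Python analogue of the analysis pipeline described above (the original analyses were run in R): a 2 (source disclosure, between) by 2 (message source, within) mixed ANOVA on the composite EP scores, and a Wilcoxon rank-sum test on the AI-minus-human rank differences. Package choices and column names are illustrative assumptions, not the authors' code.

```python
import pandas as pd
import pingouin as pg                      # pip install pingouin
from scipy.stats import mannwhitneyu      # Wilcoxon rank-sum is equivalent to Mann-Whitney U

# df_long: one row per participant x message source, with columns
# 'pid', 'disclosure' (disclosed/undisclosed), 'source' (AI/human), 'EP'
def mixed_anova(df_long: pd.DataFrame) -> pd.DataFrame:
    """Mixed ANOVA with source disclosure between subjects and message source within."""
    return pg.mixed_anova(data=df_long, dv="EP", within="source",
                          subject="pid", between="disclosure")

# df_rank: one row per participant, with the AI-minus-human mean-rank difference
def rank_test(df_rank: pd.DataFrame):
    """Two-sided rank-sum test comparing the two disclosure groups."""
    disclosed = df_rank.loc[df_rank["disclosure"] == "disclosed", "rank_diff"]
    undisclosed = df_rank.loc[df_rank["disclosure"] == "undisclosed", "rank_diff"]
    return mannwhitneyu(disclosed, undisclosed, alternative="two-sided")
```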
As illustrated in Figure 2, this interaction was due to the fact the difference between the AI-generated and human-generated messages was smaller when the source was disclosed compared to when it was not disclosed. This interaction qualified a main effect of message source (F(1,140) = 10.25, η 2 = .0039, p = .0017), which indicated overall lower ratings for human-generated compared to AI-generated messages. Follow-up comparisons conducted separately for each message source (i.e. AI-generated and human-generated messages) revealed that the EP ratings for AIgenerated messages were slightly lower and ratings for humangenerated messages were slightly higher when the source was disclosed, yet this difference was not statistically significant (t(133) = .82; p > .05 for AI-generated messages; t(125) = -.23; p > .05 for human-generated messages; see Table 2). Thus, H1 was partially supported. To test H2, we compared the difference in the mean ranks of AI and human-generated messages (AI mean rank -Human mean rank; see Figure 3) using the Wilcoxon Sum Rank Test. For the rank activity, the lower quantitative value represented a higher relative quality rank, with 1 representing the best message. Thus, the smaller differences in rank suggested a lower quantitative value for AI mean rank, hence a higher preference for AI-generated messages. We found that the median difference in rank for participants who knew the source, Mdn = -.6, was slightly higher than the median difference in rank for participants who did not know the source, Mdn = -.87, though this difference was not statistically significant (W = 2652.5, p > .05)." }, { "figure_ref": [], "heading": "Study 1 Discussion", "publication_ref": [ "b29" ], "table_ref": [], "text": "Study 1 examined how disclosing the source of a message as coming from an AI (vs. humans) influenced the evaluations of the messages and the preferences for the message source. Our H1 was partially supported -source disclosure significantly decreased the ratings difference between AI and human-generated messages. However, follow-up mean comparisons by message source showed that ratings stayed statistical consistent between non source disclosure and source disclosure conditions. This finding is generally aligned with findings from Karinshak et al. (2023). However, our H2, which addressed the ranking task that required participants to make an active selection to express their preferences about messages, was not supported. This could have occurred for many reasons, one of which is that ranking all 30 messages may have required too much cognitive effort. To further inspect source effects of AI-generated messages, we conducted a follow-up study (Study 2), examining individual differences that could boost or buffer the effects of source disclosure. For instance, participants could vary in their attitudes about the use of and general sentiment towards AI, which in turn could influence their judgments of AI-generated content. Thus, we examined attitudes towards AI as a potential factor in Study 2." }, { "figure_ref": [], "heading": "Study 2", "publication_ref": [], "table_ref": [], "text": "Study 2 replicated the source disclosure manipulation from Study 1 with a few modifications. First, we assessed people's preference for messages via message selection (top 5 out of 30) rather than the ranking task to decrease the participants' cognitive burden of comparing all 30 messages. 
Next, we examined how the influence of source disclosure on the evaluation and selection of AI-generated messages varied by the level of negative attitudes towards AI." }, { "figure_ref": [], "heading": "Negative Attitudes Towards AI as Moderators", "publication_ref": [ "b6" ], "table_ref": [], "text": "Schepman and Rodway (2023) created a scale about general attitudes toward AI (GAAIS). A major part of the measure is based on the concept of trust in the capabilities and the uses of AI. The paper showed that GAAIS was associated with psychological features such as the Big Five personality, showing that it can be used to represent various individual differences that could exist when processing messages generated by AI. For example, Bellaiche et al. (2023) examined the association between attitudes towards AI and people's judgments of art labeled as AI-created or human-created. In this study, we adopted the negative attitudes towards AI subscale, which included people's concerns about and negative sentiment towards AI, as a moderator. Adopting the negative attitudes towards AI subscale of GAAIS, we posited the following hypotheses:\nH3: Negative attitude toward AI will moderate the influence of source disclosure on the evaluation of prevention messages.\nH4: Negative attitude toward AI will moderate the influence of source disclosure on the preference for AI as the message source." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Participants", "publication_ref": [ "b48" ], "table_ref": [], "text": "As with study 1, we used two platforms to recruit participants, one administered by the university and the other by Prolific. A total of 216 adults recruited from the study pools either received course credit (University study pool) or $2.80 (Prolific; Palan and Schitter (2018)) as compensation for participating in the study. To generalize the findings of Study 1 beyond young adults, we extended the participant pool for the Prolific platform to all adults . The local review board approved the study. We discarded the data from 33 participants who did not complete the study, completed the study in under five minutes, and failed to pass the manipulation check questions, leaving 183 participants (m age = 33.83, sd age = 14.42; 56% women) in the final dataset." }, { "figure_ref": [], "heading": "Experimental Procedure, Measures, and Data Analysis", "publication_ref": [ "b58" ], "table_ref": [], "text": "The same 30 messages from Study 1 were tested in the main study. The experiment followed the same procedure as study 1 (n control = 94, n treatment = 89) with the following modification: instead of ranking the messages, we asked participants to select the 5 best messages from the pool instead of having them rank all messages. This was done because, in campaign practice, the best-in-show messages are chosen from a larger pool of candidates. Moreover, having participants and all messages is rather taxing and we expected better compliance with a more focused task. Upon completing the main experiment, participants answered background and demographics questions that included questions about their attitudes towards AI.\nThe negative attitude toward AI scale asked people to rate 8 items related to negative attitudes (e.g., \"I shiver with discomfort when I think about future uses of Artificial Intelligence\") from a scale of 1 (strongly disagree) to 5 (strongly agree) (Schepman and Rodway, 2023). 
The overall mean was 3.04, with a standard deviation of .80. The demographics questions stayed the same as in study 1.\nAll analyses were conducted in R. First, we calculated the average score for negative attitudes towards AI. To examine how the influence of source disclosure on EP of AI vs. humangenerated messages differed by the extent of negative attitude (H3), we fitted a mixed effects linear regression model. The models allowed for the intercept to vary by participant, to take into consideration of the repeated measures design. To examine how the effect of source disclosure on source preference differed by negative attitude (H4), we first calculated how many AI-generated messages were selected (out of 3), and then fitted a Poisson regression model." }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_3", "tab_4", "tab_5", "tab_6" ], "text": "For the EP ratings, we found that there was a significant three-way interaction of source disclosure, message source (AI vs. Human), and the extent of having a negative attitude toward AI (b = -.14, SE = .047, p = .0029; see Table 3). In other words, the influence of source disclosure on the evaluation of the AIgenerated messages vs. human-generated messages differed by the level of negative attitudes towards AI. A deeper inspection of the moderation effect shows that for both AI-generated and human-generated messages, source disclosure led to slightly higher EP ratings among participants with lower levels of negative attitudes towards AI, whereas it led to slightly lower EP ratings among those with higher levels of negative attitudes towards AI (see Table 4). Interestingly, when the source was disclosed, the more negative attitudes the participants had towards AI, the higher they rated the AI-generated messages, whereas the ratings of human-generated messages generally stayed flat (see Figure 4). Table 5 shows the results for messages selection. There was no moderation effect of negative attitudes toward AI (b = -.042, SE = .12, p > .05), and H4 was not supported. A deeper inspection of the results showed that those who knew the source were likely to select less number of AI-generated messages compared to those who did not know the source for those with moderate level of negative attitudes towards AI (see Figure 5 and Table 6). " }, { "figure_ref": [], "heading": "Study 2 Discussion", "publication_ref": [], "table_ref": [], "text": "Study 2 examined whether negative attitudes towards AI moderated the influence of source disclosure on the evaluation of and preference for AI-generated vs. human-generated messages. For EP ratings, having a negative attitude toward AI emerged as a significant moderator, supporting H3. Specifically, at lower levels of negative attitudes towards AI, the ratings for AI-generated and human-generated messages were slightly higher when the source was disclosed vs. not disclosed (albeit not statistically significant for AI-generated messages), whereas the opposite was observed at higher levels of negative attitudes towards AI (not statistically significant). This re- However, for the participants who knew the source, the EP ratings for AI-generated messages compared to those for human-generated messages increased with the level of negative attitudes towards AI. While deeper inspection is needed to fully unpack this phenomenon, one explanation could be \"source involvement\". 
In other words, the level of negative attitudes towards AI could have determined how closely they examined the messages: those with greater levels of negative attitudes towards AI could have paid closer attention to the content of the messages compared to those with less negative attitudes towards AI. Another explanation is that the negative attitudes towards AI measure could be too general. In the case of AI message generation, people are heavily involved in the process (see Appendix A for details of how the messages used for this study were crafted). Thus, it is possible that people's general attitudes towards AI did not play as large of a role as we expected in their evaluation of the generated messages.\nFor message selection, we did not find any significant moderating effects; thus, our H4 was not supported. However, source disclosure significantly decreased the number of AI-generated messages selected for those with moderate levels of negative attitudes towards AI. These results provide further support for people's preference against AI-generated messages. We discuss the theoretical and practical implications of our findings in the next section." }, { "figure_ref": [], "heading": "Overall Discussion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Summary of the Findings", "publication_ref": [ "b4", "b25", "b29", "b34", "b52", "b60" ], "table_ref": [], "text": "Overall, our two studies provide qualified support for our hypotheses. We found that disclosing the source led to lower ratings of AI-generated messages (partially supporting H1 in Study 1) and that negative attitudes toward AI moderated this effect (supporting H3 in Study 2). Though the analyses of the ranking and message selection tasks did not support our hypotheses (H2 in Study 1 and H4 in Study 2), they revealed interesting effects: source disclosure decreased the number of AI-generated messages selected for those with moderate levels of negative attitudes towards AI. These results suggest a slight negative bias against AI-generated messages, aligning with previous studies that showed hesitation and slight negative bias against the communicative content when participants believed AI was involved in the process (Asscher and Glikson, 2023; Jakesch et al., 2019; Karinshak et al., 2023; Liu et al., 2022; Ragot et al., 2020; Shank et al., 2023)." }, { "figure_ref": [], "heading": "Implications for Source Effects Research", "publication_ref": [ "b43", "b7", "b31", "b61", "b15", "b53" ], "table_ref": [], "text": "This paper contributes to the emerging area of study at the intersection of communication and AI by being one of the first papers to examine how knowing the source changes people's evaluation of and preference for AI-generated messages.\nThe source of a message has always been an integral part of theories and models of communication and persuasion, even going back to Aristotle's rhetoric theory (Murphy, 1981). Likewise, early models of social scientific communication research, such as Berlo's SMCR model (Berlo, 1960), Lasswell's model of communication (Lasswell, 1948), and even the Shannon-Weaver model of communication (Shannon, 1948) all included components about the source, or the creator and deliverer of the message.
Since then, a plethora of studies have examined source effects, or how various characteristics of the source impact the way people receive, process, and subsequently make judgments about the message. These studies often manipulated certain aspects of the source (e.g., expert vs. nonexpert; Clark et al. (2012)) and examined in which scenarios the various levels led to greater persuasive outcomes (e.g., when people had little information about a product, they relied on expert sources, but not necessarily when they had more information; Ratneshwar and Chaiken (1991)).\nWith the rise of AI-based technologies such as LLMs, source effects have once again come to the forefront of communication research, but the notion of \"source\" for AI-generated messages is quite complex. In particular, the message generation process for LLMs generally consists of the following steps: First, a human user feeds prompts, or intentionally crafted instructions or beginning parts of the message, to the LLM; second, the user adjusts the parameters, such as how many messages should be crafted and other factors; third, the LLM generates the messages according to steps 1 and 2. Thus, in this process, it is actually a human who initiates the message creation sequence, whereas the LLM only completes the message generation command. It seems plausible to assume that people's knowledge of AI (and their perceptions of its expertise, trustworthiness, etc.) will impact their evaluations. For example, we may surmise that it would not only matter that a message was AI-generated, but also whom people believe to have started the process. In other words, if people think that the AI message generation was initiated by expert organizations, such as the Center for Disease Control (CDC), evaluation might differ compared to AI-generated messages initiated by general users of social media platforms, or even by agents from a foreign country. In sum, with AI, there is an intersection of source effects, perceptions of AI, and various social-cognitive inferences about creator, intent, and expertise. Going forward, it will thus be important to comprehensively study these topics. Based on the present results, we can say that there are small but significant effects of source disclosure, consistent with a small preferential treatment for human-generated messages." }, { "figure_ref": [], "heading": "Implications for Public Health Campaigns", "publication_ref": [ "b28", "b69", "b39", "b40" ], "table_ref": [], "text": "The current results have interesting implications for research on health message generation and dissemination. With the advent of AI-language models, it has become extremely easy to generate high-quality health messages about any given topic. This potential can either be a blessing or a curse, depending on the source and their intent. For instance, if the CDC leveraged the power of AI for health message generation, this would be seen as largely beneficial; however, malicious actors could also leverage AI to spread fake news or even just promote unhealthy products (e.g., cigarettes). Indeed, there are already commercial applications of AI-LLMs for copywriting purposes, and these could also be used to influence users towards unhealthy, risky, or other kinds of behaviors. Thus, more work is needed to explore how these aspects intersect with the topic of AI-as-message-source as well as the influence of source disclosure.\nA related concern is about the factual truthfulness of health-related claims. 
It is well known that although LLMs are capable of generating persuasive messages, they are prone to hallucinations (Kaddour et al., 2023;Zhang et al., 2023). Although the creators of AI systems are investing large efforts to minimize such false generations, this is still an unsolved problem of the underlying technology, which will affect the evaluation of AI systems (Marcus, 2018, 2020), particularly whether AIs are seen as knowledgeable, reliable, and trustworthy. In sum, while we can expect that AI-generated messages will increasingly find their way into real-world health campaigns, numerous questions persist about their accuracy and the intent of the humans generating the messages using AI. At this point in time, the dynamically evolving landscape of AI-language generation systems prevents any final answers to these questions. Rather, longitudinal research would be needed to assess how people think about AI sources, how they adapt to the increasing prevalence of AI content, and how their evaluations are influenced by contextual factors." }, { "figure_ref": [], "heading": "Limitations, Future Avenues, and Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "As with all research, several limitations that require future research, as well as important ethical considerations, are worth highlighting. One limitation is that this study used tweets as messages. It would be interesting to examine other kinds of health messages, such as longer flyers and posters. The decision to use tweets was made because we wanted to take into account user-generated messages and because tweets have become a rather widespread form of health communication content that also gets used by the CDC and other key health organizations. In addition, the topic of AI-generated health messages raises ethical questions. In particular, the regulatory framework around these topics is currently in flux, and discussions about mandatory labeling of AI-generated content have barely even begun. Furthermore, the allowed use cases for AI content generation are also debated. For instance, using AI to generate medical diagnoses is explicitly prohibited by the creators, but generating general health information falls within the range of acceptable use (Bigscience, 2022)." }, { "figure_ref": [], "heading": "Summary and Conclusion", "publication_ref": [], "table_ref": [], "text": "Taken together, we examined the influence of source disclosure on evaluations of AI-generated messages. We found that source disclosure (i.e., labeling the source of a message as AI vs. human) significantly impacted the evaluation of the messages, albeit the effects were of relatively small magnitude, but did not significantly alter message rankings. Moreover, in Study 2 we found a significant moderating effect of negative attitudes toward AI on message evaluation. Our results show that at the point when we conducted our research, humans appear to exhibit a small preference for human-generated content if they know the source, but AI-generated messages are evaluated as equally good, if not better, if the source stays unknown. These results highlight the role of source factors for communication, and they have implications for the potential labeling of AI-generated content in the context of health promotion efforts."
}, { "figure_ref": [], "heading": "Funding Statement", "publication_ref": [], "table_ref": [], "text": "This work was supported in part through Michigan State University's Institute for Cyber-Enabled Research Cloud Computing Fellowship, with computational resources and services provided by Information Technology Services and the Office of Research and Innovation at Michigan State University. The work was additionally supported by the Strosacker Grant from Michigan State University's Health and Risk Communication Center." } ]
Advancements in artificial intelligence (AI) over the last decade demonstrate that machines can exhibit communicative behavior and influence how humans think, feel, and behave. In fact, the recent development of ChatGPT has shown that large language models (LLMs) can be leveraged to generate high-quality communication content at scale and across domains, suggesting that they will be increasingly used in practice. However, many questions remain about how knowing the source of the messages influences recipients' evaluation of and preference for AI-generated messages compared to human-generated messages. This paper investigated this topic in the context of vaping prevention messaging. In Study 1, which was pre-registered, we examined the influence of source disclosure on people's evaluation of AI-generated health prevention messages compared to human-generated messages. We found that source disclosure (i.e., labeling the source of a message as AI vs. human) significantly impacted the evaluation of the messages but did not significantly alter message rankings. In a follow-up study (Study 2), we examined how the influence of source disclosure may vary by the participants' negative attitudes towards AI. We found a significant moderating effect of negative attitudes towards AI on message evaluation, but not for message selection. However, for those with moderate levels of negative attitudes towards AI, source disclosure decreased the preference for AI-generated messages. Overall, the results of this series of studies showed a slight bias against AI-generated messages once the source was disclosed, adding to the emerging area of study that lies at the intersection of AI and communication.
The effect of source disclosure on evaluation of AI-generated messages: A two-part study
[ { "figure_caption": "Figure 1 :1Figure 1: Conceptual Diagram of Study Design", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Mean Effects Perception Scores by Experimental Condition.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Mean Rank Difference Scores by Experimental Condition.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Predicted EP Ratings with Negative Attitudes as Moderator", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Predicted Number of AI-Generated Messages Selected", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Influence of Source Disclosure on EP Scores Experimental Group; MS = Message Source.", "figure_data": "F-score p-valueη 2SD: Disclosed (vs. Not Disclosed).072.79.00049MS: Human (vs. AI)10.25.0017.0039SD: Disclosed & MS: Human4.73.031.0018Note. SD =", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The Effect of Source Disclosure by Message Source", "figure_data": "Mean (Standard Deviation)t-score (p-value)Source Not DisclosedSource DisclosedAI-Generated4.31 (.63)4.23 (.48).82 (.41)Human-Generated4.18 (.73)4.21 (.50)-.23 (.82)", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results from Mixed Effects Linear Model Experimental Group; MS = Message Source; S.E. = Standard Error", "figure_data": "TermEstimateS.E.t-scorep-valueIntercept3.32.3010.98<.001SD: Disclosed (vs. Not Disclosed).61.451.34.18MS: Human (vs. AI)-.46.098-4.69<.001Negative Attitude Towards AI.25.0972.61.0098MS: Human & SD: Disclosed.48.153.26.0011SD: Disclosed & Negative Attitude-.20.14-1.36.18MS: Human & Negative Attitude.099.0313.15.0017MS: Human & SD: Disclosed & Neg Attitude-.14.047-2.98.0029Note. SD =", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Pairwise Comparison of EP by Negative Attitudes Towards AI", "figure_data": "Disclosed -NonDisclosedS.E.z-scorep-value", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Attitudes Towards AI and Selection of AI-Generated Messages", "figure_data": "bS.E.z-scorep-valueIntercept.74.252.97.003SD: Disclosed (vs. Not Disclosed)-.09.39-.24.81Negative Attitudes Towards AI.07.079.90.37SD: Disclosed & Negative Attitudes Towards AI-.04.12-.34.73Note. S.E. = Standard Error", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Pairwise Comparison of AI-Message Selection by Negative Attitudes Towards AI", "figure_data": "Disclosed -NonDisclosedS.E.z-scorep-valueNegative Attitudes Towards AI = 2.24 *-.42.31-1.33.18Negative Attitudes Towards AI = 3.04-.51.23-2.27.023Negative Attitudes Towards AI = 3.84-.62.33-1.88.06Note. S.E. = Standard Error; *3.04 = overall mean; 2.24 = mean -1 standard deviation;3.84 = mean + 1 standard deviation", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
Sue Lim; Ralf Schmälzle
[ { "authors": "J Abalo; J Varela; V Manzano", "journal": "Journal of Business Research", "ref_id": "b0", "title": "Importance values for importanceperformance analysis: A formula for spreading out values derived from preference rankings", "year": "2007" }, { "authors": "S Ali; S Ronaldson", "journal": "British Medical Bulletin", "ref_id": "b1", "title": "Ordinal preference elicitation methods in health economics and health services research: using discrete choice experiments and ranking methods", "year": "2012" }, { "authors": "J Allem; P Escobedo; K Chu; D Soto; T Cruz; J Unger", "journal": "Tobacco Control", "ref_id": "b2", "title": "Campaigns and counter campaigns: reactions on twitter to e-cigarette education", "year": "2017" }, { "authors": "D Alwin; J Krosnick", "journal": "Public Opinion Quarterly", "ref_id": "b3", "title": "The measurement of values in surveys: A comparison of ratings and rankings", "year": "1985" }, { "authors": "O Asscher; E Glikson", "journal": "New Media & Society", "ref_id": "b4", "title": "Human evaluations of machine translation in an ethically charged situation", "year": "2023" }, { "authors": "S Baig; S Noar; N Gottfredson; M Boynton; K Ribisl; N Brewer", "journal": "Annals of Behavioral Medicine", "ref_id": "b5", "title": "Unc perceived message effectiveness: validation of a brief scale", "year": "2019" }, { "authors": "L Bellaiche; R Shahi; M Turpin; A Ragnhildstveit; S Sprockett; N Barr; A Christensen; P Seli", "journal": "Cognitive Research", "ref_id": "b6", "title": "Humans versus ai: whether and why we prefer human-created compared to ai-created artwork", "year": "2023" }, { "authors": "D Berlo", "journal": "Holt, Rinehart, and Winston", "ref_id": "b7", "title": "The Process of Communication", "year": "1960" }, { "authors": "E Boakye; N Osuji; J Erhabor; O Obisesan; A Osei; M Mirbolouk; M Blaha", "journal": "JAMA Network Open", "ref_id": "b8", "title": "Assessment of patterns in e-cigarette use among adults in the us, 2017-2020", "year": "2022" }, { "authors": "M Boynton; N Sanzo; W Brothers; A Kresovich; E Sutfin; P Sheeran; S Noar", "journal": "Tobacco Control", "ref_id": "b9", "title": "Perceived effectiveness of objective elements of vaping prevention messages among adolescents", "year": "2023" }, { "authors": "S Bubeck; V Chandrasekaran; R Eldan; J Gehrke; E Horvitz; E Kamar; Y Zhang", "journal": "", "ref_id": "b10", "title": "Sparks of artificial general intelligence: Early experiments with gpt-4", "year": "2023" }, { "authors": "N Castelo; A Ward", "journal": "Plos One", "ref_id": "b11", "title": "Conservatism predicts aversion to consequential artificial intelligence", "year": "2021" }, { "authors": "J Chambers; A Freeny; R Heiberger", "journal": "", "ref_id": "b12", "title": "Analysis of variance; designed experiments", "year": "1992" }, { "authors": "S Chen; S Chaiken", "journal": "The Guilford Press", "ref_id": "b13", "title": "The heuristic-systematic model in its broader context, in: Dual-process theories in social psychology", "year": "1999" }, { "authors": "E Clark; C Jones; J Williams; A Kurti; M Norotsky; C Danforth; P Dodds", "journal": "PloS One", "ref_id": "b14", "title": "Vaporous marketing: Uncovering pervasive electronic cigarette advertisements on twitter", "year": "2016" }, { "authors": "J Clark; D Wegener; M Habashi; A Evans", "journal": "Personality and Social Psychology Bulletin", "ref_id": "b15", "title": "Source expertise and persuasion: The effects of perceived opposition or support on message scrutiny", "year": "2012" 
}, { "authors": "L Collins; A Glasser; H Abudayyeh; J Pearson; A Villanti", "journal": "Nicotine and Tobacco Research", "ref_id": "b16", "title": "Ecigarette marketing and communication: How e-cigarette companies market e-cigarettes and the public engages with e-cigarette information", "year": "2019" }, { "authors": "B Dewilde", "journal": "", "ref_id": "b17", "title": "Textacy: Nlp, before and after spacy", "year": "2020-09-10" }, { "authors": "W J Von Eschenbach", "journal": "Philosophy & Technology", "ref_id": "b18", "title": "Transparency and the black box problem: Why we do not trust ai", "year": "2021" }, { "authors": "A Grummon; M Hall; C Mitchell; M Pulido; J Sheldon; S Noar; K Ribisl; N Brewer", "journal": "Tobacco Control", "ref_id": "b19", "title": "Reactions to messages about smoking, vaping and covid-19: Two national experiments", "year": "2022" }, { "authors": "B Grün; K Hornik", "journal": "Journal of Statistical Software", "ref_id": "b20", "title": "topicmodels: An r package for fitting topic models", "year": "2011" }, { "authors": "J Hirschberg; C Manning", "journal": "Science", "ref_id": "b21", "title": "Advances in natural language processing", "year": "2015" }, { "authors": "M Honnibal; I Montani", "journal": "", "ref_id": "b22", "title": "spacy: Industrial-strength natural language processing in python", "year": "2020" }, { "authors": "C Hutto; E Gilbert", "journal": "", "ref_id": "b23", "title": "Vader: A parsimonious rule-based model for sentiment analysis of social media text", "year": "2014" }, { "authors": "E Ismagilova; E Slade; N Rana; Y Dwivedi", "journal": "Journal of Retailing and Consumer Services", "ref_id": "b24", "title": "The effect of characteristics of source credibility on consumer behaviour: A meta-analysis", "year": "2020" }, { "authors": "M Jakesch; M French; X Ma; J Hancock; M Naaman", "journal": "", "ref_id": "b25", "title": "Ai-mediated communication: How the perception that profile text was written by ai affects trustworthiness", "year": "2019" }, { "authors": "E Jussupow; I Benbasat; A Heinzl", "journal": "", "ref_id": "b26", "title": "Why are we averse towards algorithms? 
a comprehensive literature review on algorithm aversion", "year": "2020" }, { "authors": "", "journal": "JustAnotherArchivist", "ref_id": "b27", "title": "snscrape: A social networking service scraper in python", "year": "2021" }, { "authors": "J Kaddour; J Harris; M Mozes; H Bradley; R Raileanu; R Mchardy", "journal": "", "ref_id": "b28", "title": "Challenges and applications of large language models", "year": "2023" }, { "authors": "E Karinshak; S Liu; J Park; J Hancock", "journal": "", "ref_id": "b29", "title": "Working with ai to persuade: Examining a large language model's ability to generate pro-vaccination messages", "year": "2023" }, { "authors": "C Lagerkvist", "journal": "Food Quality and Preference", "ref_id": "b30", "title": "Consumer preferences for food labelling attributes: Comparing direct ranking and best-worst scaling for measurement of attribute importance, preference intensity and attribute dominance", "year": "2013" }, { "authors": "H Lasswell", "journal": "Institute for Religious and Social Studies", "ref_id": "b31", "title": "The structure and function of communication in society", "year": "1948" }, { "authors": "S Lim; R Schmälzle", "journal": "Frontiers in Communication", "ref_id": "b32", "title": "Artificial intelligence for health message generation: an empirical study using a large language model (llm) and prompt engineering", "year": "2023" }, { "authors": "S Liu; J Yang", "journal": "Risk Analysis", "ref_id": "b33", "title": "Incorporating message framing into narrative persuasion to curb e-cigarette use among college students", "year": "2020" }, { "authors": "Y Liu; A Mittal; D Yang; A Bruckman", "journal": "", "ref_id": "b34", "title": "Will ai console me when i lose my pet? understanding perceptions of ai-mediated email writing", "year": "2022" }, { "authors": "C Longoni; A Bonezzi; C Morewedge", "journal": "Journal of Consumer Research", "ref_id": "b35", "title": "Resistance to medical artificial intelligence", "year": "2019" }, { "authors": "G Luger", "journal": "Pearson Education", "ref_id": "b36", "title": "Artificial Intelligence: Structures and strategies for complex problem solving", "year": "2005" }, { "authors": "J Lyu; G Luli; P Ling", "journal": "PloS One", "ref_id": "b37", "title": "Vaping discussion in the covid-19 pandemic: An observational study using twitter data", "year": "2021" }, { "authors": "T Ma; D Atkin", "journal": "Telematics and Informatics", "ref_id": "b38", "title": "User generated content and credibility evaluation of online health information: A meta analytic study", "year": "2017" }, { "authors": "G Marcus", "journal": "", "ref_id": "b39", "title": "Deep learning: A critical appraisal", "year": "2018" }, { "authors": "G Marcus", "journal": "", "ref_id": "b40", "title": "The next decade in ai: four steps towards robust artificial intelligence", "year": "2020" }, { "authors": "O Miles; R West; T Nadarzynski", "journal": "Digital Health", "ref_id": "b41", "title": "Health chatbots acceptability moderated by perceived stigma and severity: A cross-sectional survey", "year": "2021" }, { "authors": "M Mitchell", "journal": "Penguin UK", "ref_id": "b42", "title": "Artificial Intelligence: A guide for thinking humans", "year": "2019" }, { "authors": "J J Murphy", "journal": "University of California Press", "ref_id": "b43", "title": "Rhetoric in the Middle Ages: A history of rhetorical theory from Saint Augustine to the Renaissance", "year": "1981" }, { "authors": "K Nahon; J Hemsley", "journal": "Polity", "ref_id": "b44", "title": 
"Going viral", "year": "2013" }, { "authors": "S M Noar; J A Rohde; H Prentice-Dunn; A Kresovich; M G Hall; N T Brewer", "journal": "Addictive Behaviors", "ref_id": "b45", "title": "Evaluating the actual and perceived effectiveness of e-cigarette prevention advertisements among adolescents", "year": "2020" }, { "authors": "D J O'keefe", "journal": "Sage Publications", "ref_id": "b46", "title": "Persuasion: Theory and research", "year": "2015" }, { "authors": "S Ozawa; C Wonodi; O Babalola; T Ismail; J Bridges", "journal": "Vaccine", "ref_id": "b47", "title": "Using best-worst scaling to rank factors affecting vaccination demand in northern nigeria", "year": "2017" }, { "authors": "S Palan; C Schitter", "journal": "Journal of Behavioral and Experimental Finance", "ref_id": "b48", "title": "Prolific. ac-a subject pool for online experiments", "year": "2018" }, { "authors": "R E Petty; J T Cacioppo", "journal": "Springer", "ref_id": "b49", "title": "The elaboration likelihood model of persuasion", "year": "1986" }, { "authors": "C Pornpitakpan", "journal": "Journal of Applied Social Psychology", "ref_id": "b50", "title": "The persuasiveness of source credibility: A critical review of five decades' evidence", "year": "2004" }, { "authors": "Y T Prasetyo; R S Dewi; N M Balatbat; M L B Antonio; T Chuenyindee; A A N Perwira Redi; M N Young; J F T Diaz; Y B Kurata", "journal": "Healthcare", "ref_id": "b51", "title": "The evaluation of preference and perceived quality of health communication icons associated with covid-19 prevention measures", "year": "2021" }, { "authors": "M Ragot; N Martin; S Cojean", "journal": "", "ref_id": "b52", "title": "Ai-generated vs. human artworks. a perception bias towards artificial intelligence?", "year": "2020" }, { "authors": "S Ratneshwar; S Chaiken", "journal": "Journal of Consumer Research", "ref_id": "b53", "title": "Comprehension's role in persuasion: The case of its moderating effect on the persuasive impact of source cues", "year": "1991" }, { "authors": "N Reimers; I Gurevych", "journal": "", "ref_id": "b54", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "J A Rohde; S M Noar; H Prentice-Dunn; A Kresovich; M G Hall", "journal": "Health Communication", "ref_id": "b55", "title": "Comparison of message and effects perceptions for the real cost e-cigarette prevention ads", "year": "2021" }, { "authors": "S Russell; P Norvig", "journal": "Prentice Hall", "ref_id": "b56", "title": "Artificial intelligence: A modern approach 4th Edition", "year": "2021" }, { "authors": "T Scao; A Fan; C Akiki; E Pavlick; S Ilić; D Hesslow; M Manica", "journal": "", "ref_id": "b57", "title": "Bloom: A 176b-parameter open-access multilingual language model", "year": "2022" }, { "authors": "A Schepman; P Rodway", "journal": "International Journal of Human-Computer Interaction", "ref_id": "b58", "title": "The general attitudes towards artificial intelligence scale (gaais): Confirmatory validation and associations with personality, corporate distrust, and general trust", "year": "2023" }, { "authors": "R Schmälzle; S Wilcox", "journal": "Journal of Medical Internet Research", "ref_id": "b59", "title": "Harnessing artificial intelligence for health message generation: The folic acid message engine", "year": "2022" }, { "authors": "D B Shank; C Stefanik; C Stuhlsatz; K Kacirek; A M Belfi", "journal": "Journal of Experimental Psychology: Applied", "ref_id": "b60", "title": "Ai composer bias: Listeners like music less when they 
think it was composed by an ai", "year": "2023" }, { "authors": "C Shannon", "journal": "Bell Systems Technical Journal", "ref_id": "b61", "title": "A mathematical theory of communication", "year": "1948" }, { "authors": "P Slovic", "journal": "Science", "ref_id": "b62", "title": "The psychometric paradigm", "year": "1987" }, { "authors": "L Tunstall; L Von Werra; T Wolf", "journal": "O'Reilly Media, Inc", "ref_id": "b63", "title": "Natural language processing with Transformers", "year": "2022" }, { "authors": "A C Villanti; S E Lepine; J C West; T B Cruz; E M Stevens; H J Tetreault; D Mays", "journal": "Addictive Behaviors", "ref_id": "b64", "title": "Identifying message content to reduce vaping: Results from online message testing trials in young adult tobacco users", "year": "2021" }, { "authors": "T W Wang; A S Gentzke; L J Neff; E V Glidden; A Jamal; E Park-Lee; K A Hacker", "journal": "JAMA Network Open", "ref_id": "b65", "title": "Characteristics of e-cigarette use behaviors among us youth", "year": "2020" }, { "authors": "Y Wang; Y A Xu; J Wu; H M Kim; J L Fetterman; T Hong; M L Mclaughlin", "journal": "Health Communication", "ref_id": "b66", "title": "Moralization of e-cigarette use and regulation: A mixed-method computational analysis of opinion polarization", "year": "2023" }, { "authors": "J Wei; X Wang; D Schuurmans; M Bosma; F Xia; E H Chi; V Le; Q Zhou; D ", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b67", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "E J Wilson; D L Sherrell", "journal": "Journal of the Academy of Marketing Science", "ref_id": "b68", "title": "Source effects in communication and persuasion research: A meta-analysis of effect size", "year": "1993" }, { "authors": "Y Zhang; Y Li; L Cui; D Cai; L Liu; T Fu; X Huang; E Zhao; Y Zhang; Y Chen; L Wang; A T Luu; W Bi; F Shi; S Shi", "journal": "", "ref_id": "b69", "title": "Siren's song in the ai ocean: A survey on hallucination in large language models", "year": "2023" }, { "authors": "Z Zhang; K H Yuan", "journal": "ISDSA Press", "ref_id": "b70", "title": "Practical statistical power analysis using Webpower and R", "year": "2018" }, { "authors": "S Zhou; J Silvasstar; C Clark; A J Salyers; C Chavez; S S Bull", "journal": "Digital Health", "ref_id": "b71", "title": "An artificially intelligent, natural language processing chatbot designed to promote covid-19 vaccination: A proof-of-concept pilot study", "year": "2023" } ]
[]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b54", "b59", "b70", "b54", "b70" ], "table_ref": [], "text": "Considerable advancements have been achieved in the realm of 2D image generation recently. The generation of high-fidelity images through input text prompts has become a straightforward process. However, the translation of this success from text-to-image generation to the text-to-3D domain faces challenges due to the limited availability of 3D training data. To circumvent the need for training an extensive text-to-3D generative model from scratch, given the scarcity of 3D data, recent methods have capitalized on the favorable characteristics of diffusion models and differentiable 3D representations. These methods, rooted in score distillation sampling optimization (SDS), endeavor to extract 3D knowledge from a pre-trained, large text-to-image generative model, yielding impressive results. One notable example of such work is DreamFusion, which introduces a novel paradigm for 3D asset generation.\nIn light of the 2D-to-3D distillation approach, there has been a rapid evolution of techniques in the past year. Numerous studies have emerged, aiming to enhance the quality of generation through the implementation of multiple optimization stages. Although those methods are able to deliver a bird standing on a book a cocker spaniel wearing a crown a panda rowing a boat a train engine made out of clay a tank made of sushi a frazer nash super sport car Figure 2. Example 3D assets generated by ET3D. Our network is able to generate text controlled 3D objects in only 8 ms.\nimpressive quality of the generated 3D objects, they usually require hours to finish the optimization process, which would degrade the user experience and also impose a burden to the service providers due to the requirement of more computational resources. To tackle the efficiency issue of existing text-to-3D generation methods, Lorraine et al. recently proposed ATT3D [32]. The main insight is that they design a feed-forward mapping network, which maps the input text prompt to the parameters of a neural radiance field (NeRF). Inspired by the recent development of large text-tomulti-view image generative models [54,59] and StyleA-vatar3D [70], we propose to train a text-to-3D generative model via multi-view distillation. The main insight is to exploit a pre-trained large image generative model as a teacher and distill multi-view knowledge to supervise the training of our text-to-3D model, i.e. as a student network. In particular, we employ the pre-trained teacher network (e.g. MVDream [54]) to generate multi-view images given a text prompt. We then train a text-conditioned generative adversarial network to generate a tri-plane represented 3D object, such that its rendered multi-view images follow the same distribution as that of the pre-trained text-to-multi-view model. Different from StyleAvatar3D [70], our method does not require any prior 3D model and can scale to general text-to-3D generation task. Once our network is trained, we are able to generate a 3D object given a text prompt in only 8 ms on an NVIDIA RTX 4090 graphic card. It significantly accelerates the generation speed and reduces the computational expenses, to further democratize 3D content creation. 
In summary, our contributions are as follows:\n• We propose a simple yet effective text conditioned 3D generative adversarial network;\n• Our network can be trained by distilling multi-view knowledge from a pre-trained large text-to-multiview image generative model, without requiring SDS loss and any 3D dataset;\n• Once our network is trained, it can generate a 3D asset given a text prompt, in only 8 ms on a consumergrade graphic card. It significantly reduces the computational budget and provide the user with real-time experience;\n• It demonstrates the possibility to train efficient general text-to-3D generative model by relying on pre-trained large text-to-multi-view image diffusion model;\n• We would like to draw the attention of the community, that it would be a worthwhile direction to explore for efficient text-to-3D content generation, by exploiting pre-trained text-to-multi-view foundation models." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "We review prior methods which are the most related to ours. We classify them into three categories: unconditional 3D generation, text conditioned 3D generation and 3D aware image synthesis." }, { "figure_ref": [], "heading": "3D generative models.", "publication_ref": [ "b64", "b66", "b11", "b17", "b57", "b0", "b41", "b55", "b67", "b42", "b62", "b11", "b41", "b43", "b64", "b66", "b0", "b27", "b67", "b10", "b69", "b12", "b50", "b45", "b23", "b47", "b28", "b58", "b61", "b63", "b15", "b56", "b8", "b64", "b75", "b18", "b6", "b16", "b40" ], "table_ref": [], "text": "Unconditional 3D generation methods typically utilize existing 3D datasets to train generative models that employ various 3D representations. These representations commonly include volumetric representation [4,14,64,66], triangular mesh [12,15,18,57,68], point cloud [1,40,42,55,67], and the more recent implicit neural representation [6,10,35,43,53,62]. In the realm of 3D data, researchers have explored various generative modeling techniques that have demonstrated success in 2D image synthesis. These techniques encompass a range of methods, such as variational auto-encoders [3, 15, 57, 65], generative adversarial networks [6, 12,42,44,64,66], flow-based methods [1,2,28,67], and the increasingly popular diffusionbased method [11,22,33,37,69,72]. However, unlike image generative modeling, which benefits from a large abundance of training images, 3D generative methods often face a scarcity of sufficient 3D assets for training purposes. Typically, they are confined to category-specific datasets, such as shapeNet [8]. Although there has been a recent release of a million-scale 3D asset dataset by Objaverse [13], its size still pales in comparison to the vast amounts of 2D training data [51] employed by modern generative models for image synthesis. The limited availability of extensive training data poses a challenge for these 3D generative methods, as they struggle to generate arbitrary types of objects that can meet the diverse requirements of end consumers. In contrast to these methods that rely on copious amounts of 3D data, we propose an alternative approach that leverages a pre-trained large text-to-multi-view image generative model. By distilling multi-view knowledge, our proposed method aims to facilitate more generalized text-to-3D generation capabilities.\nText conditioned 3D generation. Owing to the scarcity of 3D data, researchers have endeavored to extract knowledge for 3D generation by utilizing pre-trained large image models. 
Initially, efforts were made to employ a pre-trained CLIP model [46] to align the input text prompt with rendered images, aiming to supervise the process of 3D object generation [24,36,49]. However, the resulting 3D objects often exhibited a decreased level of realism, primarily due to the fact that CLIP could only provide high-level semantic guidance. With the advancement of large textto-image diffusion models [48], a notable example being DreamFusion [45], the potential to generate more realistic 3D objects through knowledge distillation has been demonstrated. Subsequent works have consistently pushed the boundaries to achieve the generation of photo-realistic 3D objects that closely correspond to the provided text prompts [9,21,29,31,47,58,61,63,74]. These methods typically offer valuable insights by developing more sophisticated score distillation loss functions or by refining optimization strategies, to further enhance the quality of the generated objects. Despite the success achieved by these methods in generat-ing high-fidelity 3D shapes based on textual descriptions, they usually require hours to complete the text-to-3D shape generation process. It degrades the user experience and imposes additional economic burden to the service providers. Consequently, we propose to train an efficient text-to-3D generative model via multi-view distillation. Once our network is trained, we are able to generate 3D objects given text prompts in real-time on a consumer-grade graphic card.\n3D aware image synthesis. The exploration of extending 2D generative adversarial networks (GANs) [16] to the realm of 3D has been extensively researched, primarily due to the advantage of not requiring a dedicated 3D dataset. A key concept behind this approach involves training a GAN capable of generating 3D representations based on 2D images. Multiple forms of 3D representations have been investigated, including triangular mesh [30,56], volumetric representation [14,20,38,39,64,75], and tri-plane [5] etc. Among these options, tri-plane stands out as an efficient choice due to its low memory consumption and fast image rendering, making it well-suited for GAN training. Moreover, there are alternative methods such as GANcraft [19], which utilize sparse voxel grids for 3D scene generation, as well as fully implicit NeRF-based techniques that replace traditional generators with radiance fields [7,17,41,52,73]. Although these approaches have demonstrated remarkable capabilities in generating high-quality 3D assets, they are often limited to class-specific tasks and lack the flexibility to enable control over the generation process through textual input. Consequently, we propose ET3D for text-to-3D generation by distilling knowledge from a pre-trained large image diffusion model." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b63" ], "table_ref": [], "text": "Our goal is to propose a new text-to-3D generation paradigm utilizing multi-view images synthesized by a large pre-trained image diffusion model. Although the text-to-multi-view diffusion model can generate impressive multi-view images, these images still lack pixel-wise consistency such that they can be used to reconstruct 3D assets directly. Instead of using Score Distillation Sampling (SDS) loss to align both data distributions, i.e. 
which has shown to suffer from over-saturation, over-smoothing, low diversity and the multi-face Janus problem [45,63], we propose to exploit Generative Adversarial Network (GAN) to learn the real data distribution.\nOur method consists of two main parts as shown in fusion model. Since there is no prior text-to-3D GAN network available, we propose a simple yet effective network based upon EG3D [6], due to its impressive performance in un-conditional 3D aware image synthesis from categoryspecific multi-view image dataset. We will detail each component as follows." }, { "figure_ref": [], "heading": "Text-to-multi-view image diffusion model.", "publication_ref": [ "b54" ], "table_ref": [], "text": "Without loss of generality, we exploit a recently proposed text-to-multi-view image diffusion model, i.e. MV-Dream [54], as our teacher model. More advanced text-tomulti-view foundation models can also be used in future. MVDream is able to generate multi-view consistent images from a given text prompt. It achieves both the generalizability of 2D diffusion and the consistency of 3D data, by leveraging diffusion models pre-trained on large-scale web datasets and a multi-view dataset rendered from 3D assets.\nMVDream accepts a text prompt and four extrinsic camera parameters as input, it then generates four viewconsistent images which satisfy the input text prompt each time. The current released pre-trained model enforces the four views to be 90 • separated for the longitude angle and share the same elevation angle, which ranges within [0 • , 30 • ]. During the training of the student network, we sample multiple times for the same text prompt and the starting longitude angle is randomly selected within [0 • , 360 • ] each time. We note that the generated images are not always consistent between two samples (i.e. 8 images in total) even the input text prompts are the same. However, we found that our student network is not affected and still can learn to generate 3D assets properly." }, { "figure_ref": [], "heading": "Text-to-3D generative model.", "publication_ref": [], "table_ref": [], "text": "Our student model is built upon EG3D [6], a state-ofthe-art 3D-aware image synthesis GAN network, which can learn from images only without requiring any 3D assets. As shown in Fig. 3, it consists of five key components: a mapping network, a tri-plane generator network, a neural renderer, a super-resolution module and a discriminator network. We will describe each component briefly as follows. More detailed network architecture can be found in our supplementary material." }, { "figure_ref": [], "heading": "Mapping network.", "publication_ref": [ "b22", "b25", "b22" ], "table_ref": [], "text": "The mapping network takes a latent variable z ∈ R 512 , camera parameters P ∈ R 25 and text embedding T ∈ R 768 as input, and map them into a 1280-dim feature vector. Both the latent variable and camera parameters are mapped into a 512-dim feature vector via EG3D's original MLP network. We then use a pre-trained CLIP model [23] to encode the input text prompt into a 768-dim feature vector. Both feature vectors are then concatenated together to form the final 1280-dim feature vector for both the tri-plane generator network and the superresolution module. Tri-plane generator network. By compromising the rendering efficiency and representation ability, we choose to use tri-plane [6] to represent the 3D object implicitly. 
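A minimal sketch of how the 1280-dim conditioning vector described above could be assembled, which is what the generator described next consumes. Module and argument names are illustrative assumptions, not the authors' released code.

```python
# Sketch only: an MLP embeds the 512-d latent and 25-d camera parameters, and the
# result is concatenated with a frozen 768-d CLIP text embedding into the 1280-d
# vector used by the tri-plane generator and the super-resolution module.
import torch
import torch.nn as nn

class TextConditionedMapping(nn.Module):
    def __init__(self, z_dim=512, cam_dim=25, txt_dim=768, w_dim=512, depth=2):
        super().__init__()
        layers, d_in = [], z_dim + cam_dim
        for _ in range(depth):
            layers += [nn.Linear(d_in, w_dim), nn.LeakyReLU(0.2)]
            d_in = w_dim
        self.mlp = nn.Sequential(*layers)

    def forward(self, z, cam, txt_emb):
        # z: (B, 512), cam: (B, 25) flattened extrinsics + intrinsics, txt_emb: (B, 768)
        w = self.mlp(torch.cat([z, cam], dim=-1))      # (B, 512)
        return torch.cat([w, txt_emb], dim=-1)         # (B, 1280) conditioning vector
```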
The generator network takes the 1280-dim feature vector as input and outputs the tri-plane feature images, each with dimension $\mathbb{R}^{256 \times 256 \times 32}$. Neural renderer. Given the generated tri-plane feature images and the sampled camera pose, we can render a 2D feature image $\mathbf{F} \in \mathbb{R}^{128 \times 128 \times 32}$ via volume rendering. In particular, we can shoot a ray from the camera center towards the sampled pixel. Discrete 3D points can be sampled along the ray. For each 3D point, we can project it onto the tri-planes to obtain three feature vectors, i.e. $\mathbf{F}_{XY}$, $\mathbf{F}_{XZ}$ and $\mathbf{F}_{YZ} \in \mathbb{R}^{32}$. They are then concatenated together and input to a tri-plane decoder to obtain the point density $\sigma$ and a color feature vector $\mathbf{c} \in \mathbb{R}^{32}$. The pixel feature vector can then be computed via:\n$\mathbf{F}(\mathbf{x}) = \sum_{i=1}^{n} T_i \left(1 - \exp(-\sigma_i \delta_i)\right) \mathbf{c}_i$, (1)\nwhere $\mathbf{F}(\mathbf{x}) \in \mathbb{R}^{32}$ is the rendered feature vector at pixel position $\mathbf{x}$, $T_i$ is the transmittance and can be computed via $T_i = \exp\left(-\sum_{k=1}^{i-1} \sigma_k \delta_k\right)$, both $\sigma_i$ and $\mathbf{c}_i$ are the predicted density and color feature vector of the sampled $i$-th 3D point, and $\delta_i$ is the distance between two neighboring sampled points. Super-resolution module. To generate higher-resolution 3D assets, a super-resolution module is applied. It takes the rendered feature image $\mathbf{F} \in \mathbb{R}^{128 \times 128 \times 32}$ and the 1280-dim feature vector from the mapping network as input, and predicts an image $\tilde{\mathbf{I}} \in \mathbb{R}^{256 \times 256 \times 3}$ as the final output image. Discriminator network.\nWe modify the discriminator network of StyleGAN2 [26] to exploit the text prompt embedding as an additional condition to train the generator network. As in the mapping network, we use a pre-trained CLIP model [23] to encode the input text prompt, such that the discriminator can learn to differentiate images according to the provided text prompt." }, { "figure_ref": [], "heading": "Loss functions.", "publication_ref": [ "b26", "b49" ], "table_ref": [], "text": "Both the generator network and discriminator network are trained in an adversarial manner. Given images $\mathbf{I}$ from the pre-trained text-to-multi-view image diffusion model, with known camera parameters $\xi_{\mathbf{I}}$, $K$ for both the extrinsics and intrinsics, latent codes $\mathbf{z} \sim \mathcal{N}(0, 1)$ and the corresponding text prompts $t$, we train our model using a GAN objective. R1-regularization is applied to further stabilize the training [34]:\n$\mathcal{L}(\theta, \phi) = \mathbb{E}_{\mathbf{I} \sim p_D} \big[ f(D_\phi(\mathbf{I}, \xi_{\mathbf{I}}, t)) - \lambda \, \lVert \nabla D_\phi(\mathbf{I}, \xi_{\mathbf{I}}, t) \rVert^2 \big]$ (2)\n$\quad + \, \mathbb{E}_{\mathbf{z} \sim \mathcal{N}(0,1), \, \xi, \xi' \sim p_\xi, \, t} \big[ f(-D_\phi(G_\theta(\mathbf{z}, \xi, \xi', K, t), \xi', t)) \big]$, (3)\nwhere $f(t) = -\log(1 + \exp(-t))$ and $\lambda$ controls the strength of the R1-regularizer. To better align the generated 3D asset with the textual description, we also apply a CLIP loss between the predicted image $\tilde{\mathbf{I}}$ and the text prompt, which has been shown to be effective in prior methods [27,50,60]. Both the generator and discriminator are then trained with alternating gradient descent combining the GAN objective with the CLIP loss:\n$\min_\theta \max_\phi \; \mathcal{L}(\theta, \phi) + \lambda_c \mathcal{L}_{\mathrm{clip}}(\theta)$, (4)\n$\mathcal{L}_{\mathrm{clip}}(\theta) = \arccos^2\big(\mathrm{enc}_i(\tilde{\mathbf{I}}), \mathrm{enc}_t(t)\big)$, (5)\nwhere $\lambda_c$ is a hyper-parameter, and both $\mathrm{enc}_i$ and $\mathrm{enc}_t$ are the pre-trained CLIP image and text encoders. " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation details", "publication_ref": [], "table_ref": [], "text": "Our framework offers the flexibility of being trained either online or offline with MVDream. We experimentally find that offline training can still deliver satisfying results, even though the number of training samples is much smaller than with online training. 
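Before turning to the dataset details, a compact sketch of the tri-plane rendering in Eq. (1) above may help make the renderer concrete. This is an illustrative PyTorch sketch with assumed tensor shapes and an assumed tri-plane decoder; the actual renderer also handles ray generation, coordinate normalization and batching.

```python
# Illustrative sketch of Eq. (1): tri-plane feature sampling and volume rendering
# for a single ray. Tensor shapes and the `decoder` callable are assumptions.
import torch
import torch.nn.functional as F

def sample_triplane(planes, pts):
    # planes: (3, C, H, W) feature images for the XY, XZ and YZ planes
    # pts: (N, 3) 3D points normalized to [-1, 1]^3
    coords = (pts[:, [0, 1]], pts[:, [0, 2]], pts[:, [1, 2]])
    feats = []
    for plane, xy in zip(planes, coords):
        grid = xy.view(1, -1, 1, 2)                                   # (1, N, 1, 2)
        f = F.grid_sample(plane.unsqueeze(0), grid, align_corners=False)
        feats.append(f.view(plane.shape[0], -1).t())                  # (N, C)
    return torch.cat(feats, dim=-1)                                   # (N, 3C)

def render_pixel(decoder, planes, pts, deltas):
    # pts: (n, 3) samples along one ray; deltas: (n,) spacing between samples.
    # `decoder` maps (n, 3C) features to density (n, 1) and color features (n, 32).
    sigma, color = decoder(sample_triplane(planes, pts))
    sigma = sigma.squeeze(-1)
    alpha = 1.0 - torch.exp(-sigma * deltas)                          # per-sample opacity
    acc = torch.cat([torch.zeros(1, device=pts.device), (sigma * deltas)[:-1]])
    T = torch.exp(-torch.cumsum(acc, dim=0))                          # transmittance T_i
    weights = T * alpha
    return (weights.unsqueeze(-1) * color).sum(dim=0)                 # (32,) pixel feature
```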
For efficiency considerations, we construct a substantial dataset with a wide variety of animals, objects, etc., facilitating offline training in this experimental setup. The dataset comprises compositions of animals, objects and styles, totaling up to 5,000 different prompts and 800,000 generated images at a resolution of 256 × 256 pixels. We hold out 100 prompts during training and use them to evaluate the compositional generalization performance. Our method has no restriction preventing it from scaling to an even larger number of text prompts. We use a learning rate of $2.5 \times 10^{-3}$ for the generator training and $2 \times 10^{-3}$ to train the discriminator network. The batch size is 32. The network is trained with 8 NVIDIA A100 graphics cards. All the evaluations are conducted on a single RTX 4090 graphics card. We exploit the commonly used Fréchet Inception Distance (FID) metric to evaluate the quality of rendered images from the generated 3D assets. The CLIP score is used to evaluate the similarity between the input text prompt and the generated 3D asset." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Ablation study", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We conduct experiments to study the effects of the CLIP loss and conditional text input for the discriminator network. The experimental results are presented in Table 1 and Fig. 4. They demonstrate that the textual condition on the discriminator network improves both the quality and text coherence of the generated 3D assets. The qualitative results shown in Fig. 4 also demonstrate that the generated 3D assets fail to satisfy the input textual description if we do not apply the text condition on the discriminator. Unexpectedly, the usage of the CLIP loss does not improve the similarity between the text description and the generated 3D asset. The reason might be that the text condition on the discriminator network can already provide sufficient supervision for text control. However, we find the FID metric is improved from 7.7 to 7.4 when the CLIP loss is applied. Therefore, we still keep the CLIP loss during training to obtain better image quality.\nWe also compare against EG3D [6], which does not support text controlled 3D generation. The experimental results demonstrate that EG3D struggles to learn unconditional 3D generation from a dataset with many different categories. It demonstrates that the additional text condition helps the network cluster the data distribution and ease the learning of the generative network." }, { "figure_ref": [], "heading": "Quantitative comparisons", "publication_ref": [ "b63", "b24", "b1" ], "table_ref": [ "tab_1" ], "text": "We compare ET3D against prior state-of-the-art methods for quantitative evaluations. We exploit two state-of-the-art SDS optimization based methods, i.e. DreamFusion [45] and ProlificDreamer [63]. We use a rendering image resolution of 64 × 64 pixels for the optimization of ProlificDreamer. We also compare against Shap-E [25], which is pre-trained with text-labeled 3D data. Another similar method is ATT3D [32]. However, we cannot compare against it since they do not release their implementations to the general public. We exploit the CLIP score and time consumption as metrics for the evaluation. The experimental results are presented in Table 2. The metrics are computed over 400 different objects. It demonstrates that our method is able to achieve a similar or even better text similarity score, compared to DreamFusion, ProlificDreamer and Shap-E. 
On the other hand, the time required to generate a 3D asset by our method is only around 8 ms, which is 225,000 times faster than DreamFusion and 450,000 times faster than ProlificDreamer. The evaluations are conducted on a consumer graphics card, i.e. an NVIDIA RTX 4090." }, { "figure_ref": [ "fig_3" ], "heading": "Qualitative comparisons", "publication_ref": [ "b1" ], "table_ref": [], "text": "The qualitative evaluation results are presented in Fig. 5. We present rendered images from two different views to evaluate their 3D consistency and texture quality. The surface normal or 3D mesh is also presented for geometry comparisons. The experimental results demonstrate that DreamFusion tends to generate over-saturated and blurry 3D assets due to the inherent characteristics of the SDS loss. While ProlificDreamer improves the SDS loss and delivers impressive results, it still performs poorly for some text prompts. In contrast, our method delivers better results, in terms of both the texture quality and 3D consistency. It demonstrates the great potential of exploiting multi-view images, generated by a pretrained image diffusion model, for high-quality text-to-3D content generation.\n[Figure 5. Qualitative comparisons of DreamFusion-IF, ProlificDreamer, Shap-E and ours on prompts such as \"a yorkie dog dressed as a maid\" and \"a spanish galleon\". DreamFusion tends to generate blurry 3D assets due to the SDS loss. While ProlificDreamer generates impressive 3D objects, it still fails on several text prompts. Our method generates view-consistent 3D assets in good quality.]\nTo demonstrate the generalization capability of our model to unseen prompts, we follow the experimental setting used by ATT3D [32]. In particular, it generates compositional prompts using the template \"a {animal} {activity} {theme}\" and withholds a subset of prompts as unseen for evaluation. Additionally, we further select 40 animals and 40 styles, employing the compositional prompt \"a {animal}, {style}\" and exploiting the compositions along the diagonal direction to validate style generalization. The 3D objects generated from part of these unseen prompts are illustrated in Fig. 7. To better perceive consistent 3D objects, we render images of the same object from multiple views, i.e. 0°, 90°, 180° and 270°. Please refer to the Appendix and supplementary materials for more results. The experimental results demonstrate that our network is able to generalize to unseen text prompts and delivers good results for both geometry and texture style compositions. Fig. 6 presents the text embedding interpolation results. We linearly interpolate the embeddings of two text prompts and input the result to the generator network. 
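A minimal sketch of how such a prompt interpolation could be run once the model is trained; the generator, CLIP text encoder, latent code and camera below are assumed handles to the trained ET3D components, not actual API names.

```python
# Sketch of the prompt interpolation behind Fig. 6 (illustrative names): blend the
# CLIP text embeddings of two prompts and feed the mixtures, with a fixed latent
# and camera, to the trained generator.
import torch

@torch.no_grad()
def interpolate_prompts(generator, clip_text_encoder, prompt_a, prompt_b,
                        z, camera, steps=8):
    t_a = clip_text_encoder(prompt_a)              # (1, 768) text embedding
    t_b = clip_text_encoder(prompt_b)
    frames = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        t_mix = (1.0 - alpha) * t_a + alpha * t_b
        frames.append(generator(z, camera, t_mix))  # rendered image for this blend
    return frames
```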
The results demonstrate that the interpolation properties of GANs continue to be considerably smooth.\n[Figure 7. Compositional generalization performance on unseen text prompts (e.g. \"a wolf, manga style, Japanese comics, black and white\"; \"a squirrel, riding a motorcycle, wearing a leather jacket, wearing a party hat\"). It demonstrates that our network is able to generalize to unseen text prompts and generates high fidelity 3D assets.]" }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "We present a novel framework for efficient text-to-3D generation. Our network is built upon an unconditional 3D GAN network, and is trained via multi-view distillation of a pretrained text-to-multi-view model. Different from prior Score Distillation Sampling (SDS) based optimization methods, which usually require a large amount of computational resources to generate a 3D asset, we are able to generate a 3D object in only 8 ms once the network is trained. Due to the available resources, we currently only trained our network with a small number of text prompts. Even with such a limited number of text prompts, our network exhibits good generalization performance on unseen text input. It demonstrates the great potential of our framework for large-scale efficient text-to-3D generation tasks." } ]
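For reference, a compact sketch of one training iteration under the objective in Eqs. (2)-(5) described in the Method section: a non-saturating GAN loss with R1 regularization on the teacher's images, plus the squared-arccosine CLIP loss on the generator output. All module names are placeholders; this is a sketch under stated assumptions, not the released implementation.

```python
# Compact, illustrative sketch of Eqs. (2)-(5). G, D and clip_image_enc are
# placeholder modules; txt_emb is the CLIP text embedding of the prompt.
import torch
import torch.nn.functional as F

def discriminator_loss(D, G, real, cam_real, z, cam, txt_emb, lam_r1=10.0):
    real = real.detach().requires_grad_(True)
    d_real = D(real, cam_real, txt_emb)
    d_fake = D(G(z, cam, txt_emb).detach(), cam, txt_emb)
    adv = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()   # f(t) = -log(1+e^-t)
    grad, = torch.autograd.grad(d_real.sum(), real, create_graph=True)
    r1 = grad.pow(2).reshape(grad.shape[0], -1).sum(1).mean()      # R1 penalty on reals
    return adv + lam_r1 * r1

def generator_loss(D, G, clip_image_enc, z, cam, txt_emb, lam_clip=1.0):
    fake = G(z, cam, txt_emb)
    adv = F.softplus(-D(fake, cam, txt_emb)).mean()                # non-saturating GAN loss
    sim = F.cosine_similarity(clip_image_enc(fake), txt_emb, dim=-1).clamp(-0.999, 0.999)
    clip_loss = torch.arccos(sim).pow(2).mean()                    # Eq. (5)
    return adv + lam_clip * clip_loss
```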
a) "a stone bust of tiger" (b) "a pig, graffiti colors" (c) "... photorealistic" "... robot" "... cartoon" "... sculpture" (d) "... disney style" → "... robot" Figure 1. ET3D specializes in the efficient generation of 3D objects from text input, offering capabilities such as (a) producing multiviewconsistent 3D objects conditioned on textual input, (b) generating diverse 3D objects with identical text and distinct latent inputs, (c) enabling style control in the output through text, and (d) facilitating smooth interpolations between prompts.
ET3D: Efficient Text-to-3D Generation via Multi-View Distillation
[ { "figure_caption": "They can then render multi-view images from NeRF, and train the mapping network by SDS loss computed via a pretrained 2D diffusion model. Once the network is trained, they are able to achieve efficient text-to-3D generation via a simple feed-forward pass. Due to characteristic of the used SDS loss, their method suffers from a lack of diversity and shared limitations with prior SDS-based works [45]. Another inspiring work is StyleAvatar3D [70], they exploit a pre-trained ControlNet [71] to generate multi-view images given a prior 3D head model and use those images to retrain EG3D [6] for 3D head generation. Since they require an existing 3D model for multi-view image generation, it is difficult for them to scale to general text-to-3D generation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3. The pipeline of ET3D. Our framework consists of a pretrained teacher model and a trainable student model. The student model learns to generate text controlled 3D asset via multi-view distillation of the teacher model. During training, both models receive the same text prompts. The teacher model is able to generate multi-view images. To train the student model, a discriminator network is used to supervise the generator network of the student model, to generate 3D content that has the same distribution as the teacher model in terms of rendered multi-view images. Once the student model is trained, it can generate 3D content in only 8 ms on a consumer graphic card.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Ablation study on the text condition. It demonstrates that the generator fails to learn proper text control if there is no text condition to be applied on the discriminator network. EG3D struggles to learn from such a large dataset containing many different types of objects. It results in 3D objects with multiple heads.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Text embedding interpolation results. The experimental results demonstrate that we can linearly interpolate between two text prompt embeddings. It shows the interpolation properties of GANs continue to be considerably smooth.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Ablation study. We study the effects of the CLIP loss and the text condition on the discriminator. The results demonstrate that the generator network struggles to learn text controlled generation without applying the text condition to the discriminator network. Due to the text conditioned discriminator already has strong capability to supervise text consistent 3D asset generation, the effect of CLIP loss in text control is marginal. However, it results in better FID score. We thus still adopt it for our training.", "figure_data": "FID↓ ViT-L/14↑ ViT-bigG-14↑Ours7.40.3050.45w/o clip-loss7.70.3170.452w/o D text cond 12.10.1880.322EG3D11.3××", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative Comparison. We compare ET3D against several prior state-of-the-art methods, including DreamFusion, ProlificDreamer and Shap-E. 
It demonstrates that ET3D is able to generate 3D content of similar quality in terms of the CLIP score, while its generation speed is greatly improved compared to those methods, e.g. 225,000 times faster than DreamFusion.", "figure_data": "Method | ViT-L/14↑ | ViT-bigG-14↑ | Time(s)↓\nDreamFusion-IF | 0.297 | 0.417 | 1800\nProlificDreamer | 0.334 | 0.447 | 3600\nShap-E | 0.265 | 0.347 | 5\nOurs | 0.322 | 0.427 | 0.008", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
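To make the adversarial distillation described in the Figure 3 caption more concrete, below is a minimal, illustrative PyTorch sketch of one training step. It is not the authors' implementation: the teacher, generator, and discriminator are stand-in modules, camera-pose conditioning and the CLIP loss are omitted, and all module names and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

# Stand-ins (assumptions): the real teacher is a multi-view diffusion model,
# the student generator produces 3D content and renders views of it, and the
# discriminator is a text-conditioned multi-view discriminator.
def teacher(text_emb):                       # returns 4 "teacher" views per prompt
    return torch.randn(text_emb.size(0), 4, 3, 64, 64)

generator = torch.nn.Linear(512 + 64, 4 * 3 * 64 * 64)        # student generator stand-in
discriminator = torch.nn.Linear(4 * 3 * 64 * 64 + 512, 1)     # text-conditioned D stand-in
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def student_views(text_emb, z):              # rendered multi-view images of the student's output
    return generator(torch.cat([text_emb, z], dim=-1)).view(-1, 4, 3, 64, 64)

def d_logit(views, text_emb):                # discriminator score conditioned on the prompt
    return discriminator(torch.cat([views.flatten(1), text_emb], dim=-1))

text_emb = torch.randn(8, 512)               # placeholder prompt embeddings
z = torch.randn(8, 64)                       # latent codes, enabling diverse outputs per prompt

# Discriminator step: teacher views are "real", student renders are "fake", using the
# non-saturating logistic loss with an R1 penalty on teacher images (the same ingredients
# as the GAN objective in Eqs. (2)-(3), up to sign conventions).
real = teacher(text_emb).requires_grad_(True)
d_real = d_logit(real, text_emb)
r1 = torch.autograd.grad(d_real.sum(), real, create_graph=True)[0].pow(2).flatten(1).sum(1).mean()
fake = student_views(text_emb, z).detach()
d_loss = F.softplus(-d_real).mean() + F.softplus(d_logit(fake, text_emb)).mean() + 10.0 * r1
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: make student renders indistinguishable from teacher views for this prompt.
g_loss = F.softplus(-d_logit(student_views(text_emb, z), text_emb)).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```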
Yiming Chen; Zhiqi Li; Peidong Liu
[ { "authors": "Francesc Moreno-Noguer; Albert Pumarola; Stefan Popov; Vittorio Ferrari", "journal": "", "ref_id": "b0", "title": "C-Flow: Conditional generative flow models for images and 3D point clouds", "year": "2020" }, { "authors": "Pashmina Sadegh Aliakbarian; Federica Cameron; Andrew Bogo; Thomas J Fitzgibbon; Cashman", "journal": "", "ref_id": "b1", "title": "FLAG: Flowbased 3D Avatar generation from sparse observations", "year": "2022" }, { "authors": "Eelena Balashova; Vivek Singh; Jiangping Wang; Brian Teixeira; Terrence Chen; Thomas Funkhouser", "journal": "", "ref_id": "b2", "title": "Structureaware shape synthesis", "year": "2018" }, { "authors": "Andrew Brock; Theodore Lim; J M Ritchie; Nick Weston", "journal": "", "ref_id": "b3", "title": "Generative and discriminative voxel modeling with convolutional neural networks", "year": "2016" }, { "authors": "Connor Z Eric R Chan; Matthew A Lin; Koki Chan; Boxiao Nagano; Shalini De Pan; Orazio Mello; Leonidas J Gallo; Jonathan Guibas; Sameh Tremblay; Khamis", "journal": "", "ref_id": "b4", "title": "Efficient geometry-aware 3d generative adversarial networks", "year": "2022" }, { "authors": "Eric R Chan; Connor Z Lin; Matthew A Chan; Koki Nagano; Boxiao Pan; Shalini De Mello; Orazio Gallo; Leonidas Guibas; Jonathan Tremblay; Sameh Khamis; Tero Karras; Gordon Wetzstein", "journal": "", "ref_id": "b5", "title": "Efficient Geometry Aware 3D Generative Adversarial Networks", "year": "2022" }, { "authors": "Eric R Chan; Marco Monteiro; Petr Kellnhofer; Jiajun Wu; Gordon Wetzstein", "journal": "", "ref_id": "b6", "title": "pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis", "year": "2021-06" }, { "authors": "X Angel; Thomas Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Jianxiong Su; Li Xiao; Fisher Yi; Yu", "journal": "", "ref_id": "b7", "title": "ShapeNet: An Information-Rich 3D Model Repository", "year": "2015" }, { "authors": "Rui Chen; Yongwei Chen; Ningxin Jiao; Kui Jia", "journal": "", "ref_id": "b8", "title": "Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation", "year": "2023" }, { "authors": "Zhiqin Chen; Hao Zhang", "journal": "", "ref_id": "b9", "title": "Learning implicit fields for generative shape modeling", "year": "2019" }, { "authors": "Gene Chou; Yuval Bahat; Felix Heide", "journal": "", "ref_id": "b10", "title": "DiffusionSDF: Conditional generative modeling of signed distance functions", "year": "2023" }, { "authors": "Thomas Hofmann; Dario Pavllo; Jonas Kohler; Aurelien Lucchi", "journal": "", "ref_id": "b11", "title": "Learning generative models of textured 3D meshes from real-world images", "year": "2021" }, { "authors": "Matt Deitke; Dustin Schwenk; Jordi Salvador; Luca Weihs; Oscar Michel; Eli Vanderbilt; Ludwig Schmidt; Kiana Ehsani; Aniruddha Kembhavi; Ali Farhadi", "journal": "", "ref_id": "b12", "title": "Objaverse: A universe of annotated 3d objects", "year": "2023" }, { "authors": "Matheus Gadelha; Subhransu Maji; Rui Wang", "journal": "IEEE", "ref_id": "b13", "title": "3d shape induction from 2d views of multiple objects", "year": "2017" }, { "authors": "Lin Gao; Jie Yang; Tong Wu; Yujie Yuan; Hongbo Fu; Yukun Lai; Hao Zhang", "journal": "", "ref_id": "b14", "title": "SDM-Net: Deep generative network for structured deformable mesh", "year": "2019" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil 
Ozair; Aaron Courville; Yoshua Bengio", "journal": "", "ref_id": "b15", "title": "Generative adversarial nets", "year": "2014" }, { "authors": "Jiatao Gu; Lingjie Liu; Peng Wang; Christian Theobalt", "journal": "", "ref_id": "b16", "title": "Stylenerf: A style-based 3d-aware generator for highresolution image synthesis", "year": "2021" }, { "authors": "Ben Heli; Haggai Hamu; Itay Maron; Gal Kezurer; Yaron Avineri; Lipman", "journal": "", "ref_id": "b17", "title": "Multi-chart generative surface modeling", "year": "2018" }, { "authors": "Zekun Hao; Arun Mallya; Serge Belongie; Ming-Yu Liu", "journal": "", "ref_id": "b18", "title": "Gancraft: Unsupervised 3d neural rendering of minecraft worlds", "year": "2021" }, { "authors": "Philipp Henzler; J Niloy; Tobias Mitra; Ritschel", "journal": "", "ref_id": "b19", "title": "Escaping plato's cave: 3d shape from adversarial rendering", "year": "2019" }, { "authors": "Yukun Huang; Jianan Wang; Yukai Shi; Xianbiao Qi; Zheng-Jun Zha; Lei Zhang", "journal": "", "ref_id": "b20", "title": "Dreamtime: An improved optimization strategy for text-to-3d content creation", "year": "2023" }, { "authors": "Kahei Hui; Ruihui Li; Jingyu Hu; Chi-Wing Fu", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b21", "title": "Neural wavelet-domain diffusion for 3D shape generation", "year": "2022" }, { "authors": "Gabriel Ilharco; Mitchell Wortsman; Ross Wightman; Cade Gordon; Nicholas Carlini; Rohan Taori; Achal Dave; Vaishaal Shankar; Hongseok Namkoong; John Miller; Hannaneh Hajishirzi; Ali Farhadi; Ludwig Schmidt", "journal": "", "ref_id": "b22", "title": "Openclip", "year": "2021-07" }, { "authors": "Ajay Jain; Ben Mildenhall; Jonathan T Barron; Pieter Abbeel; Ben Poole", "journal": "", "ref_id": "b23", "title": "Zero-shot text-guided object generation with dream fields", "year": "2022" }, { "authors": "Heewoo Jun; Alex Nichol", "journal": "", "ref_id": "b24", "title": "Shap-e: Generating conditional 3d implicit functions", "year": "2023" }, { "authors": "Tero Karras; Samuli Laine; Miika Aittala; Janne Hellsten; Jaakko Lehtinen; Timo Aila", "journal": "", "ref_id": "b25", "title": "Analyzing and improving the image quality of StyleGAN", "year": "2020" }, { "authors": "Mohammad Nasir; Tianhao Khalid; Eugene Xie; Popa Belilovsky; Tiberiu", "journal": "", "ref_id": "b26", "title": "Clip-mesh: Generating textured meshes from text using pretrained image-text models", "year": "2022-12" }, { "authors": "Roman Klokov; Edmond Boyer; Jakob Verbeek", "journal": "", "ref_id": "b27", "title": "Discrete point flow networks for efficient point cloud generation", "year": "2020" }, { "authors": "Weiyu Li; Rui Chen; Xuelin Chen; Ping Tan", "journal": "", "ref_id": "b28", "title": "Sweetdreamer: Aligning geometric priors in 2d diffusion for consistent text-to-3d", "year": "2023" }, { "authors": "Yiyi Liao; Katja Schwarz; Lars Mescheder; Andreas Geiger", "journal": "", "ref_id": "b29", "title": "Towards unsupervised learning of generative models for 3d controllable image synthesis", "year": "2020" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b30", "title": "Magic3d: High-resolution text-to-3d content creation", "year": "2023" }, { "authors": "Jonathan Lorraine; Kevin Xie; Xiaohui Zeng; Chen-Hsuan Lin; Towaki Takikawa; Nicholas Sharp; Tsung-Yi Lin; Ming-Yu Liu; Sanja Fidler; James Lucas", "journal": "", "ref_id": "b31", "title": "ATT3D: 
Amortized Text-to-3D Object Synthesis", "year": "2023" }, { "authors": "Shitong Luo; Wei Hu", "journal": "", "ref_id": "b32", "title": "Diffusion Probabilistic Models for 3D Point Cloud Generation", "year": "2021" }, { "authors": "Lars Mescheder; Sebastian Nowozin; Andreas Geiger", "journal": "", "ref_id": "b33", "title": "Which training methods for gans do actually converge", "year": "2018" }, { "authors": "Lars Mescheder; Michael Oechsle; Michael Niemeyer; Sebastuan Nowozin; Andreas Geiger", "journal": "", "ref_id": "b34", "title": "Occupancy Networks: Learning 3D reconstruction in function space", "year": "2019" }, { "authors": "Mohammad Nasir; Tianhao Khalid; Eugene Xie; Tiberiu Belilovsky; Popa", "journal": "", "ref_id": "b35", "title": "Clip-mesh: Generating textured meshes from text using pretrained image-text models", "year": "2022" }, { "authors": "Norman Muller; Yawar Siddiqui; Lorenzo Porzi; Samuel Rota Bulo; Peter Kontschieder; Matthias Niebner", "journal": "", "ref_id": "b36", "title": "DiffRF: Rendering-Guided 3D Radiance Field Diffusion", "year": "2023" }, { "authors": "Thu Nguyen-Phuoc; Chuan Li; Lucas Theis; Christian Richardt; Yong-Liang Yang", "journal": "", "ref_id": "b37", "title": "Hologan: Unsupervised learning of 3d representations from natural images", "year": "2019" }, { "authors": "Christian Thu H Nguyen-Phuoc; Long Richardt; Yongliang Mai; Niloy Yang; Mitra", "journal": "Advances in neural information processing systems", "ref_id": "b38", "title": "Blockgan: Learning 3d object-aware scene representations from unlabelled images", "year": "2020" }, { "authors": "Alex Nichol; Heewoo Jun; Prafulla Dhariwal; Pamela Mishkin; Mark Chen", "journal": "", "ref_id": "b39", "title": "PointE: A system for generating 3D point clouds from complex prompts", "year": "2022" }, { "authors": "Michael Niemeyer; Andreas Geiger", "journal": "IEEE", "ref_id": "b40", "title": "Giraffe: Representing scenes as compositional generative neural feature fields", "year": "2021-06" }, { "authors": "Ioannis Mitliagkas; Panos Achlioptas; Olga Diamanti; Leonidas Guibas", "journal": "", "ref_id": "b41", "title": "Learning representations and generative models for 3D point clouds", "year": "2018" }, { "authors": "Jeong Joon Park; Peter Florence; Julian Straub; Richard Newcombe; Steven Lovegrove", "journal": "", "ref_id": "b42", "title": "DeepSDF: Learning continuous signed distance functions for shape representation", "year": "2019" }, { "authors": "Thu Nguyen Phuoc; Christian Richardt; Long Mai; Yongliang Yang; Niloy Mitra", "journal": "", "ref_id": "b43", "title": "BlockGAN: Learning 3D object-aware scene representations from unlabeled images", "year": "2020" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b44", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b45", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Amit Raj; Srinivas Kaza; Ben Poole; Michael Niemeyer; Ben Mildenhall; Nataniel Ruiz; Shiran Zada; Kfir Aberman; Michael Rubenstein; Jonathan Barron; Yuanzhen Li; Varun Jampani", "journal": "", "ref_id": "b46", "title": "Dreambooth3d: Subject-driven text-to-3d generation", "year": "2023" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; 
Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b47", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Aditya Sanghi; Hang Chu; Joseph G Lambourne; Ye Wang; Chinyi Cheng; Marco Fumero; Kamal Rahimi Malekshan", "journal": "", "ref_id": "b48", "title": "CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation", "year": "2022" }, { "authors": "Axel Sauer; Tero Karras; Samuli Laine; Andreas Geiger; Timo Aila", "journal": "", "ref_id": "b49", "title": "StyleGAN-T: Unlocking the power of GANs for fast large-scale text-to-image synthesis", "year": "2023" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b50", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Katja Schwarz; Yiyi Liao; Michael Niemeyer; Andreas Geiger", "journal": "", "ref_id": "b51", "title": "Graf: Generative radiance fields for 3d-aware image synthesis", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b52", "title": "", "year": "2020" }, { "authors": "Katja Schwarz; Axel Sauer; Michael Niemeyer; Yiyi Liao; Andreas Geiger", "journal": "", "ref_id": "b53", "title": "VoxGRAF: Fast 3D-aware image synthesis with sparse voxel grids", "year": "2022" }, { "authors": "Yichun Shi; Peng Wang; Jianglong Ye; Mai Long; Kejie Li; Xiao Yang", "journal": "", "ref_id": "b54", "title": "Mvdream: Multi-view diffusion for 3d generation", "year": "2023" }, { "authors": "Dong Wook Shu; Sung Woo Park; Junseok Kwon", "journal": "", "ref_id": "b55", "title": "3D point cloud generative adversarial network based on tree structured graph convolutions", "year": "2019" }, { "authors": "Attila Szabó; Givi Meishvili; Paolo Favaro", "journal": "", "ref_id": "b56", "title": "Unsupervised generative 3d shape learning from natural images", "year": "2019" }, { "authors": "Qingyang Tan; Lin Gao; Yukun Lai; Shihong Xia", "journal": "", "ref_id": "b57", "title": "Variational autoencoders for deforming 3D mesh models", "year": "2018" }, { "authors": "Jiaxiang Tang; Jiawei Ren; Hang Zhou; Ziwei Liu; Gang Zeng", "journal": "", "ref_id": "b58", "title": "Dreamgaussian: Generative gaussian splatting for efficient 3d content creation", "year": "2023" }, { "authors": "Shitao Tang; Fuyang Zhang; Jiacheng Chen; Peng Wang; Yasutaka Furukawa", "journal": "", "ref_id": "b59", "title": "MVDiffusion: enabling holistic multiview image generation with correspondence aware diffusion", "year": "2023" }, { "authors": "Can Wang; Menglei Chai; Mingming He; Dongdong Chen; Jing Liao", "journal": "", "ref_id": "b60", "title": "CLIP-NeRF: Text-to-image driven manipulation of neural radiance fields", "year": "2023" }, { "authors": "Haochen Wang; Xiaodan Du; Jiahao Li; Raymond A Yeh; Greg Shakhnarovich", "journal": "", "ref_id": "b61", "title": "Score Jacobian Chaining: lifting pretrained 2D diffusion models for 3D generation", "year": "2023" }, { "authors": "Tengfei Wang; Bo Zhang; Ting Zhang; Shuyang Gu; Jianmin Bao; Tadas Baltrusaitis; Jingjing Shen; Dong Chen; Fang Wen; Qifeng Chen; Baining Guo", "journal": "", "ref_id": "b62", "title": "Rodin: A generative model for sculpting 3D 
digital Avatars using diffusion", "year": "2023" }, { "authors": "Zhengyi Wang; Cheng Lu; Yikai Wang; Fan Bao; Chongxuan Li; Hang Su; Jun Zhu", "journal": "", "ref_id": "b63", "title": "Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation", "year": "2023" }, { "authors": "Jiajun Wu; Chengkai Zhang; Tianfan Xue; Bill Freeman; Josh Tenenbaum", "journal": "Advances in neural information processing systems", "ref_id": "b64", "title": "Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling", "year": "2016" }, { "authors": "Zhijie Wu; Xiang Wang; Di Lin; Dani Lischinski; Daniel Cohen-Or; Hui Huang", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b65", "title": "SAGNet: Structure aware generative network for 3D shape modeling", "year": "2019" }, { "authors": "Pieter Peers; Xiao Li; Yue Dong; Xin Tong", "journal": "", "ref_id": "b66", "title": "Synthesizing 3D shapes from silhouette image collections using multiprojection generative adversarial networks", "year": "2019" }, { "authors": "Guandao Yang; Xun Huang; Zekun Hao; Mingyu Liu; Serge Belongie; Bharath Hariharan", "journal": "", "ref_id": "b67", "title": "PointFlow: 3D Point Cloud Generation with Continuous Normalizing Flows", "year": "2019" }, { "authors": "Kim Youwang; Kim Ji-Yeon; Tae-Hyun Oh", "journal": "", "ref_id": "b68", "title": "CLIP-Actor: text driven recommendation and stylization for animating human meshes", "year": "2022" }, { "authors": "Xiaohui Zeng; Arash Vahdat; Francis Williams; Zan Gojcic; Or Litany; Sanja Fidler; Karsten Kreis", "journal": "", "ref_id": "b69", "title": "LION: latent point diffusion models for 3D shape generation", "year": "2022" }, { "authors": "Chi Zhang; Yiwen Chen; Yijun Fu; Zhenglin Zhou; Y U Gang; Billzb Wang; Bin Fu; Tao Chen; Guosheng Lin; Chunhua Shen", "journal": "", "ref_id": "b70", "title": "StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation", "year": "2023" }, { "authors": "Lvmin Zhang; Anyi Rao; Maneesh Agrawala", "journal": "", "ref_id": "b71", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Linqi Zhou; Yilun Du; Jiajun Wu", "journal": "", "ref_id": "b72", "title": "3D Shape Generation and Completion through Point-Voxel Diffusion", "year": "2021" }, { "authors": "Peng Zhou; Lingxi Xie; Bingbing Ni; Qi Tian", "journal": "", "ref_id": "b73", "title": "Cips-3d: A 3d-aware generator of gans based on conditionallyindependent pixel synthesis", "year": "2021" }, { "authors": "Joseph Zhu; Peiye Zhuang", "journal": "", "ref_id": "b74", "title": "Hifa: High-fidelity textto-3d with advanced diffusion guidance", "year": "2023" }, { "authors": "Jun-Yan Zhu; Zhoutong Zhang; Chengkai Zhang; Jiajun Wu; Antonio Torralba; Josh Tenenbaum; Bill Freeman", "journal": "Advances in neural information processing systems", "ref_id": "b75", "title": "Visual object networks: Image generation with disentangled 3d representations", "year": "2018" } ]
[ { "formula_coordinates": [ 5, 97.11, 139.14, 189.25, 30.32 ], "formula_id": "formula_0", "formula_text": "F(x) = n i=1 T i (1 -exp(-σ i δ i ))c i ,(1)" }, { "formula_coordinates": [ 5, 65.67, 199.32, 104.5, 14.11 ], "formula_id": "formula_1", "formula_text": "T i = exp(- i-1 k=1 σ k δ k )" }, { "formula_coordinates": [ 5, 57.59, 515.11, 228.78, 26.1 ], "formula_id": "formula_2", "formula_text": "L(θ, ϕ) = E I∈p D (f (D ϕ (I, ξ I , t)) -λ ∥∇D ϕ (I, ξ I , t)∥ 2 ) (2) + E z∈N (0,1),ξ,ξ ′ ∈p ξ ,t [f (-D ϕ (G θ (z, ξ, ξ ′ , K, t), ξ ′ , t))], (3)" }, { "formula_coordinates": [ 5, 94.37, 651.28, 191.99, 14.66 ], "formula_id": "formula_3", "formula_text": "min θ max ϕ L(θ, ϕ) + λ c L clip (θ),(4)" }, { "formula_coordinates": [ 5, 92.71, 670.77, 193.65, 12.2 ], "formula_id": "formula_4", "formula_text": "L clip (θ) = arccos 2 (enc i ( Ĩ), enc t (t)),(5)" } ]
2024-03-19
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b52", "b61", "b81", "b26", "b65", "b70", "b28", "b29", "b27", "b26", "b65", "b70", "b57", "b65", "b57", "b65", "b71", "b58", "b78", "b17", "b45", "b50", "b61", "b83", "b87" ], "table_ref": [], "text": "We are witnessing a renewed excitement in visual question answering (VQA), the task of answering questions about images, with recent successes of large foundation models in zero-shot and few-shot settings. Moreover, many papers are reporting that such models can achieve or even exceed human performance [53,62,82].\nWhile modern VQA datasets enabled this exciting progress in VQA model development, they have two key limitations. First, most publicly-available VQA datasets [27,66,71] are artificially-constructed to probe model performance rather stemming from authentic use cases. Yet, repeatedly, prior work [29,30,85] has revealed that models developed for artificially-constructed settings generalize poorly to authentic use cases due to significant domain shifts. The second key limitation of existing datasets lies in a limited diversity of content representing authentic use cases. The only VQA dataset reflecting an authentic use case is VizWiz-VQA [28]. However, VizWiz-VQA is constrained to one population, specifically blind individuals seeking assistance in learning about their visual surroundings. A further limitation of the dataset is that the answers were collected years later to support model benchmarking, rather than having been validated as acceptable by those who asked the questions. [27,66,71]. VQAonline is the first VQA dataset to originate from an authentic use case end to end, including with authentic context, answers, and topics/categories labels. It also is the first VQA dataset from an online question answering platform. A critical distinction of VQAonline, which necessitates a new evaluation methodology, is its lengthy answers. (Q=question, C=context, A=answer).\ncontext, and answer validated by the individual asking the question. We conduct extensive analysis to reveal unique aspects of our dataset (Section 3.2).\nOur work also aligns with VQA datasets providing context that supplements each visual question: Context-VQA [58] and ScienceQA [66]. However, context in Context-VQA [58] is defined as the kind of website the image is sourced from rather than our definition of supplementary data that supports answering the visual question. More similar to our work is ScienceQA [66], however the context is both artificially-generated and has distinct compositions and content, as discussed in Section 3.2.\nCommunity Question Answering (CQA) Datasets. Numerous datasets have been developed around CQA. However, all focus on text-based question answering [72] or natural language questions with multimedia answers [59]. To our knowledge, our work is the first to introduce a community multimodal question answering dataset specifically for visual question answering.\nVQA Models. Traditional VQA models generate short answers [79] and do not consider extra context to help answer the questions. Yet, our dataset features answers that are lengthy, typically containing multiple sentences, and provides additional context that can help answer visual questions. Accordingly, we benchmark large Vision and Language Models (VLMs) [18,46,51,62,84,88] which, by their generative design, are bettersuited to generate long-form answers using their strong natural language understanding capabilities. 
Overall, we find these models perform poorly and provide fine-grained analysis to reveal opportunities for improving such models." }, { "figure_ref": [], "heading": "VQAonline Dataset", "publication_ref": [], "table_ref": [], "text": "We now introduce our VQAonline dataset originating from an authentic use case where people used a web-based platform to post and answer visual questions." }, { "figure_ref": [ "fig_1" ], "heading": "Dataset Creation", "publication_ref": [], "table_ref": [], "text": "Source. We created our dataset by scraping data from the online question-answering platform Stack Exchange. We used the publicly-available data that spanned from August 2010 to November 2022, and adhered to all data license restrictions (cc-by-sa 4.0) set forth by Stack Exchange.\nFrom the Stack Exchange archive, we collected all examples containing four components: a natural language question, a natural language context, an image, and at least one answer. Originally, this information was collected from Stack Exchange users via four user entry interactions. The natural language question was collected as the post \"title\", using a text entry field paired with instructions \"be specific and imagine you're asking a question to another person\" when writing the question. The image was collected as a URL link. The natural language context was provided via a \"body\" text entry field supporting inline images (via a URL link) with users instructed to supply \"all the information someone would need to answer your question\". These three pieces of information were then shared publicly, after which individuals on the site could publicly provide answers. 4 Finally, individuals who posted the visual questions could mark which answer was the \"accepted answer\"; we only use examples for which individuals did this.\nOf note, the Stack Exchange platform has two key mechanisms that contribute to high quality VQAs. First, the platform enforces a stringent reputation system with moderation policies, as documented at https://meta.stackexchange.com/help. Second, it features topic-specific sites for users to engage with a relevant community; e.g., music.stackexchange.com and gardening.stackexchange.com. We used the data from 105 topical sites that belong to five super-categories, with 46 about culture & recreation (e.g., travel, sports, Christianity, French language), 25 about life & arts (e.g., physical fitness, parenting, pets), 24 about science (e.g., physics, chemistry, biology), 7 about being professional, and 3 about business. 5 All topics are shown in Figure 2. Filtering. We conducted two filtering steps to obtain our final dataset of 64,696 VQAs. First, we removed low-quality VQAs based on Stack Exchange's feature that enables users to up-vote or down-vote each post (either question or answer). Each up-vote added 1 to the score and each down-vote subtracted 1 from the score. We removed visual questions with scores 0 or less for either the question or answer. Second, we excluded visual questions with multiple images (e.g., GIF) or with visual answers. \nDataset" }, { "figure_ref": [], "heading": "Dataset Analysis", "publication_ref": [ "b26", "b57", "b65", "b28", "b14", "b53", "b55", "b54" ], "table_ref": [], "text": "Comparison to Existing Datasets. 
To contextualize how our dataset compares to the current focus of the research community, we compare VQAs in our dataset to those in eight existing popular VQA datasets (Table 1: Characterization of existing VQA datasets and our new VQAonline in terms of mean question length (i.e., Q), mean answer length (i.e., A), number of images (i.e., Nimg), number of authentic topics (i.e., # Auth. Topics), inclusion of context, inclusion of authentic visual questions (i.e., Auth. VQ), inclusion of authentic context (i.e., Auth. C), and inclusion of answers validated by those asking the questions (i.e., Vld. A)). We chose VQAv2 [27] because it is the most popular (e.g., highly cited) VQA dataset, Context-VQA [58] and ScienceQA [66] because they similarly claim to provide context, VizWiz-VQA [29] because it similarly originates from an authentic use case, INFOSEEK [15] because it similarly centers on information seeking rather than probing an AI model, OK-VQA [54] because it similarly requires domain-specific knowledge, as well as DocVQA [56] and Infographic VQA [55] because they similarly contain VQAs with documents and infographics." }, { "figure_ref": [ "fig_1", "fig_3", "fig_3" ], "heading": "VQA Dataset", "publication_ref": [ "b2", "b28", "b65", "b28", "b30", "b22" ], "table_ref": [], "text": "For each dataset, we report the mean number of words in each question, the mean number of words in each answer, the number of images, the number of topics which were authentically used to label clusters of VQAs by theme, whether context is included, whether the visual questions are authentic, whether the context is authentic, and whether the answers are validated by those asking the questions. Results are reported in Table 1.\nOur findings reveal commonalities of our VQAonline dataset with existing datasets. For example, we observe that the visual questions themselves are similar in terms of the typical question length and number of images. Questions have a mean length of 9 words in VQAonline compared to 6 to 11 words in other datasets, and the nearly 65,000 images in our dataset are comparable in number to those in existing datasets.\nOur analysis also demonstrates significant differences from existing datasets across two key dimensions. The first dimension is dataset authenticity. Our VQAonline dataset is the only fully authentic dataset for the entire VQA pipeline, meaning all contents originate from an authentic use case. The second key difference of our dataset is the typical answer length. Answers in our VQAonline dataset are orders of magnitude longer than in existing datasets, with a mean length of 173 words in VQAonline versus 11 or fewer words in existing datasets. This larger length is not anomalous, as the median answer length in VQAonline is 120 words. This longer length in part reflects that responses are provided in paragraph form, most commonly with 3 sentences per answer. This distinction renders the mainstream evaluation metric for VQA models largely ineffective, as it assumes brief responses [3]. This issue will be explored in Section 4.2.\nVQAonline Topics. The authentic user-based posting of each VQA within a topic community reveals the diversity of content in our dataset. We report the number of VQAs in each of the 105 topics in Figure 2. As shown, the number of examples per topic ranges from as few as 10 to over 20,000. VQAonline Questions. Our VQAonline dataset is the first with authentic visual questions asked by users of an online question-answering platform.
This results in a domain shift from the only other VQA dataset with authentically asked visual questions, VizWiz-VQA [29], which were asked by blind users of a mobile phone application. For instance, blind people typically asked about their physical surroundings (e.g., \"What is this?\", \"What does this say?\") while Stack Exchange users often asked for more fine-grained or specialized information about the images (e.g., \"What's a better term to describe these 'stripes' in blouse (Screen shot attached)\", \"What is the third character on this sign seen in Taipei?\"). Another distinction stems from Stack Exchange's feature for avoiding redundant questions by letting users remove duplicates of other already submitted questions. The consequence is that only 0.2% of natural language questions in our VQAonline dataset are duplicated compared to 37.9% of questions in VizWiz-VQA. VQAonline Context. Our VQAonline dataset is the first to include authentic context. The only other VQA dataset containing context, as defined by supplementary data that supports answering the visual question, is ScienceQA [66]. However, the context was contrived with heuristic rules for extracting data from elementary and high school science curricula. Consequently, context in ScienceQA is typically narrated in thirdperson, while context in our VQAonline often are narrated in first-person. Another difference is ScienceQA has a shorter average length of the context compared to that in our dataset; i.e., a mean of 41 words, median of 28 words, and most commonly 3 sentences versus a mean of 127 words, median of 94 words, and most commonly 4 sentences. A further difference is that the context in ScienceQA only relates to 26 science topics while the context in our dataset spans 105 topics that extend beyond science. VQAonline Image Contents. We next quantify the tendency for images to include screenshots and infographics to reflect the tendency for visual questions to be related to the digital realm and data analysis. We identified an image as a \"screenshot\" when contents were copied digitally from a computing device and an \"infographic\" when the image shows a chart, table, graph, attention map, map, or diagram. We manually labeled 105 images sampled from each of the 105 topics as well as an independent random selection of 100 images from the entire dataset. We found for the topic-sampled set that 79% are screenshots and 17% show infographics and for the samples from the entire dataset that 84% are screenshots and 33% show infographics. The similarly high prevalence of screenshots across both samples underscores Stack Exchange users' tendency to ask questions about the digital realm. We hypothesize the different prevalence of infographics across both samples is due to topical differences, as 52% of all VQAs (i.e., 33,384 out of 64,696) belong to only 23% of the topics (i.e., 24 science topics). Supporting this hypothesis, we found from inspecting 100 random VQAs from each subset of the science topics and non-science topics that infographics were in 58% and 6% of images respectively. VQAonline Importance of Images. We next examine the value of images for answering the questions. To do so, we flagged each image as question-aligned if elements described in the question are found in the image and context-aligned if elements described in the context are found in the image. We additionally flagged whether an image is necessary, based on whether the answer can be obtained without the image. 
We manually labeled 105 examples from each of the 105 topics. We found that 91.4% of the images are question-aligned and 99% of the images are context-aligned. This finding contrasts that of the only other dataset with authentic visual questions, VizWiz-VQA [29], where only 71% are deemed answerable because blind photographers struggled to capture contents of interest in their images. We also found that 65.7% images are necessary for answering the question in our dataset. Often, the purpose of unnecessary images was to motivate or show scenarios in which the asker encounters the question. Altogether, these findings underscore that VQA models often must analyze the images to arrive at the answers.\nVQAonline User Intentions. We finally characterized intents behind the visual questions, complementing two prior works that identified user intentions for three community question answering platforms (Answerbag, Metafilter, and Yahoo! Answer) [31] and the music StackExchange community [23]. Our work is the first to investigate user intentions for VQAs. Through a multi-round analysis of the data, described in the Supplementary Materials, we defined eight potential intents: instruction, evidence, verification, advice, identification, opinion, reason, and other. Examples for three of these categories are shown in Figure 3. We then recruited vetted crowdworkers from Amazon Mechanical Turk to label one primary intent per visual question when shown the question, image, context, and answer. They labeled 105 examples, with one randomly selected from each of the 105 topics. We used the majority vote label per VQA. The resulting distribution of assigned categories is 19% are instruction, 18% evidence, 17% verification, 16% advice, 10% identification, 10% opinion, 8% reason, and 1% other. Examples from the three most common categories are shown in Figure 3." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_4", "fig_4", "fig_4", "fig_5", "fig_4", "fig_5", "fig_5" ], "heading": "Model Benchmarking", "publication_ref": [ "b45", "b80", "b87", "b80", "b50", "b3", "b55", "b80", "b83", "b26", "b80", "b17", "b34", "b53", "b56", "b80", "b61", "b55", "b42", "b20", "b2", "b47", "b19", "b25", "b32", "b36", "b32", "b32", "b80", "b59" ], "table_ref": [ "tab_1", "tab_1", "tab_2", "tab_2", "tab_2" ], "text": "We next analyze the performance of modern models on our VQAonline dataset to reveal what makes it challenging for the research community.\nModels. We evaluate the following six Vision and Language Models (VLMs) that achieve strong performance for various VQA datasets and subtasks in a zero-shot setting:\n-BLIP-2 [46]: state-of-the-art (SOTA) for zero-shot setting on four VQA datasets, including ScienceQA and VizWiz-VQA according to LVLM-eHub, a comprehensive evaluation benchmark for VLMs [81]. -MiniGPT-4 [88]: ranked first in open-world VQA according to LLVM-eHub [81].\n-LLaVA [51]: top-two performance in the zero-shot setting for three VQA datasets, including for DocVQA [4,56] according to LLVM-eHub [81]. -mPLUG-Owl [84]: SOTA for VQAv2 [27] in early 2023 and ranked second overall in open-world VQA scenario according to LLVM-eHub [81].\n-InstructBLIP [18]: SOTA for zero-shot setting on four VQA datasets, including OK-VQA [5, 35,54,57] according to LLVM-eHub [81]. -GPT-4V [62]: SOTA for numerous VQA datasets, such as Infographic VQA [56] and TVQA [43]. 
While we aim to assess its performance, note that its reproducibility is a major concern due to its proprietary nature, which means the model can evolve without transparency or even cease to exist. Given that concern, paired with its costliness (i.e., access is possible only through a fee-based API [21]), we evaluate GPT-4V only on a subset of data. Due to space constraints, we report key findings here in the main paper and all results in the Supplementary Materials. It is worth noting that most of these models are believed to be trained on Stack Exchange data and so potentially saw our VQAonline data: three (MiniGPT-4, LLaVA, and mPLUG-Owl) are confirmed to have language encoders trained using Stack Exchange data, and three others (BLIP2, GPT-4V, InstructBLIP) are suspected of it (the original sources of parts of these models' training data are not publicly shared). While models exposed to such data during training bring an unfair advantage, evaluation with such models is still valuable. In particular, poor performance from such models strengthens the argument that our dataset offers a difficult problem for the research community, since they perform poorly despite their advantage.\nEvaluation Metrics. Given the deficiency of the existing VQA evaluation metric for our dataset, in that it is designed for short responses [3], we instead leverage five evaluation metrics for evaluating longer text from related fields (e.g. long-form question answering and image captioning). Like prior work, we evaluate with respect to multiple metrics to address the well-known issue that each brings its own deficiencies.\nThree metrics assess similarity between model-generated answers and human-generated answers, with their values ranging from 0 (bad) to 1 (perfect). ROUGE-L [48] identifies the longest sequence of words shared between the reference and the predicted answer, and consequently neglects semantic similarity (e.g. synonyms), context (e.g. different meanings of a word based on context: given the reference "the bank of America", the prediction "the bank of a river" would receive a higher score than "bank account"), and negation (e.g. given the reference "Exercise is good for human health", the prediction "Exercise is NOT good for human health" would receive a higher score than "Humans can benefit from exercise"). METEOR [20] uses more sophisticated techniques, such as stemming and synonym matching, to assess word similarity between the reference and predicted answers, but it also neglects context and negation. BERTscore [86] measures the similarity of BERT's contextual embeddings between the reference and predicted answers to incorporate consideration of context. However, this reliance on contextual similarity can reward semantically related but factually incorrect content, as well as grammatically correct, well-written responses that are inaccurate [26].\nTwo metrics assess compatibility between the image and the model-generated answer, with their values ranging from 0 (bad) to 1 (perfect). CLIP-S [33], designed to assess the compatibility between an image and a caption by measuring the similarity between image and text embeddings, has been shown to have a positive correlation with human evaluations of answer faithfulness (i.e. whether the answer can be grounded in the image versus is hallucinated) [37]. However, CLIP-S can reflect the biases of the pretraining data [33] and ignores the reference answers.
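To make the image-compatibility scoring concrete before turning to RefCLIP-S, below is a minimal sketch of the standard CLIPScore formulation [33] using a publicly available CLIP checkpoint. The backbone, scaling constant, and preprocessing shown here are common defaults rather than necessarily the exact configuration used in our evaluation; note also that CLIP truncates long answers to its 77-token limit.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_s(image_path, answer, w=2.5):
    """CLIPScore: w * max(cos(image_embedding, text_embedding), 0)."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[answer], images=image, return_tensors="pt",
                       padding=True, truncation=True)   # long answers get truncated to 77 tokens
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
    cos = F.cosine_similarity(img_emb, txt_emb).item()
    return w * max(cos, 0.0)
```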
RefCLIP-S [33] incorporates consideration of the reference answer by calculating a harmonic mean of the CLIP-S score and the cosine similarity between the reference answer and the predicted answer.\nOverall Results. Results are shown in Table 2. (As noted earlier, results for GPT-4V on the dataset subset are provided in the Supplementary Materials; they show GPT-4V achieves the best performance, followed closely by mPLUG-Owl.) We first examine the ranking of all models. The best-performing model is mPLUG-Owl, based on five of the six evaluation metrics. The next best models are LLaVA and MiniGPT-4, which consistently outperform the remaining two models, BLIP2 and InstructBLIP, across all evaluation metrics. This ranking of models was also found in prior work [81]. We hypothesize this observed ranking arises in part because mPLUG-Owl, LLaVA, and MiniGPT-4 bring disproportionately greater advantages from having previously seen Stack Exchange data, and because mPLUG-Owl and LLaVA have been finetuned with instruction-following data for VQA tasks. The one exception to this ranking is that LLaVA is the top-performing model with respect to the CLIP-S score. While the evaluation scores support ranking the models' performance, it is worth noting that the scores for each evaluation metric are compressed in a small range. We attribute this to a known limitation that such metrics perform best at distinguishing poor versus excellent models rather than between mediocre and good models [60]. We suspect all benchmarked models fall into this latter range, as we will show that evaluation scores can vary much more than in this compressed range for specific topics, signifying poor and excellent performance for subportions of the dataset (Figure 4).\nBest Performing Model in One-shot Setting. For the best-performing model, we explore to what extent in-context learning examples can provide a further performance boost. We choose two types of training exemplars: (1) randomly selected and (2) matching topic tag. We explore two variants of both types, where we include versus exclude the images in the exemplars. We perform experiments in a one-shot setting and design prompts based on best practices established by prior work.\nResults are shown in Table 2. As shown, none of the one-shot examples enhance performance. This could be due to the model overfitting to or becoming distracted by the exemplar. This may also relate to the inherent diversity of VQAs within our dataset, where even VQAs within the same topic can require very different skills and knowledge. This concern is further supported by the fact that, overall, models benefit from a simpler exemplar that lacks images. Moreover, exemplars with matching topic tags consistently outperform randomly chosen ones, underscoring that greater exemplar similarity to the test case has the potential to boost performance.\nAnalysis With Respect to Input Types. We next analyze the predictive power of each input to the top-performing model, mPLUG-Owl, by removing each independently: i.e., Q+C, Q+I, and C+I. Results are shown in Table 3.\nOverall, the model performs best when given all the information (i.e., Q+C+I), and this is also observed for GPT-4V (see Supplementary Materials). This underscores a complementary benefit provided by each of the three input types.\nThe findings suggest that context (which is typically language-based) is the most valuable information source for arriving at the target answer.
That is because the worstperforming variant across both models, according to five of the six evaluation metrics, lacks context as input (i.e., Q+I). The one exception is the results ranked by CLIP-S, which deems the model lacking context (i.e., Q+I) as the best-performing model. We attribute this conflicting finding from different evaluation metrics to the specifics of CLIP-S; specifically, it measures how compatible an answer is to an image and so removing the context likely enables models to predict answers that are more compatible to an image, even if less correct.\nThe findings also suggest that the questions' predictive power is nearly negligible, since models lacking questions as input (i.e., C+I) achieve nearly comparable performance to models receiving all inputs (Q+C+I). We suspect this is because often the information provided by the question is included in the context. For example, a question asking \"What test am I taking, anyways?\" while the part of the context includes \"can you tell me what field of math the test was on?\" Additionally, the findings suggest that the predictive power of the image is small, as models lacking images as input (i.e., Q+C) achieve nearly comparable performance to models receiving all inputs (Q+C+I). The one exception is evaluation with respect to CLIP-S scores, for which the worst-performing model lacks images as input (i.e., Q+C). As suggested through our small-scale study in Section 3.2, we suspect the low predictive power of images does not imply the images are irrelevant. Instead, we believe this argues for improving models so they can better utilize images when predicting answers.\nAnalysis With Respect to VQA Topics. We next analyze the influence of VQA topic on model performance. To do so, we report performance of the best model, mPLUG-Owl, with respect to the five super-categories (Table 3) and for the two top-performing models (mPLUG-Owl and LLaVA) with respect to the 105 topics (Figure 4). 11Overall, the models perform best on Science and worst on Culture & Recreation from the five super-categories (i.e., Table 3). We suspect this is due to differences in level of reliance different categories have on vision understanding, an area where prevailing VLMs fail, with less visual information needed for Science questions and more for Culture & Recreation.\nFor per-topic performance (i.e., Figure 4), we observe similar performance outcomes for mPLUG-Owl and LLaVA models. For example, a common outlier across all metrics except CLIP-S is high performance for some science-related topics (e.g., Computer Science, Mathematics, Economics, and Physics). Additionally, both models have consistently poor performance for the Pyccknn language and puzzling with respect to most evaluation metrics. An example of the puzzling topic with the top-performing model's prediction is shown in Figure 5, Example 1. We attribute the difficulty of puzzling to its joint need for strong vision understanding and reasoning skills. We also observe that both models have low performance for culture or religious-related topics (e.g., Hinduism, Mi Yodeya). Altogether, these findings suggest modern models may share many of the same strengths and weaknesses. For visualization simplicity, we show text labels only for topics with interesting identified trends (We omitted \"language\" for each language topic, such as \"Esperanto\" instead of \"Esperanto Language\", and we omitted topics with less than 10 data points, such as Esperanto and Community Building. 
Another trend we observe for specific topics (i.e., Figure 4) is different performance for topics related to different languages 12 . The models work best on visual questions related to Germanic Languages; e.g., English as exemplified in Figure 5, Example 2. The next best languages are Romance Languages (e.g., Spanish, French, Italian, Portuguese), then East Asian Languages (e.g., Chinese, Korean), and finally Pyccknn. We suspect causes of such consistent performance discrepancies across languages are: (1) Data availability: languages which have fewer datasets and less diversity (e.g., one exemplified in Figure 5, Example 3) perform worse than those with greater diversity and abundance of datasets (e.g., English), (2) Language Complexity: Languages with simpler grammar (e.g., Esperanto) or familiar grammatical structures (e.g., Romance languages for an English-trained model) are easier for models to learn, and (3) Text recognition can be more challenging for languages with unique scripts, such as Chinese or Korean while these language-related VQAs often pertain to text recognition (an example is shown in the Supplementary Materials). We believe these findings shed light on future research directions for multilingual VQA and text VQA research." }, { "figure_ref": [ "fig_4" ], "heading": "Qualitative Evaluation via Human Evaluation", "publication_ref": [ "b0", "b0", "b74", "b0" ], "table_ref": [ "tab_3", "tab_1", "tab_2" ], "text": "We next conduct a human evaluation to supplement imperfect quantitative evaluation metrics with more reliable qualitative assessments as well as to quantify how well the quantitative metrics align with human assessments.\nHuman Judgement Collection. We hired ten human annotators for an IRB-approved human study. Each individual was a domain expert in one of ten topics covered in our dataset (i.e., Physical Fitness, Botany, Music, Mechanics, Politics, Law, Chinese Language, Economics, Statistics, and Artificial Intelligence), with at least a Master's degree in a field related to their assigned VQA topic. These topics' frequency in our dataset span from very frequent (e.g. Stats) to moderate frequency (e.g. Economics) to less frequent (e.g. Law). Each domain expert was assigned 20 VQAs on their topic, and asked to rate the performance of six model-generated answers per VQA. As a result, we collected 1200 human judgments for 200 VQAs (i.e. 10 * 20 * 6).\nThen, each domain expert was shown for each VQA the original visual question, context, and answer alongside the six model-generated answers in a home-grown webbased annotation tool and asked to rate each model-generated answer as either incorrect (i.e., score of 0), partially correct (i.e, score of 0.5), or correct (i.e., score of 1). The definitions for these ratings are: Correct: At least one of these two items needs to be true for the answer to be correct: (1) Aligns with all the key points in the reference answer. (2) The answer also makes sense based on your best judgment, even if not included in the reference answer. Partially Correct: At least one of these two items needs to be true for the answer to be partially correct: (1) The answer matches part of the key points in the reference answer that answers the question but has errors in the content. (2) The answer is partially correct but not included in the reference answer, based on your best judgment. Incorrect: The response doesn't include any key points from the reference answer and doesn't make sense.\nHuman Assessment of Model Performance. 
Mean human ratings per model across the 200 VQAs (20 VQAs * 10 topics) are as follows: GPT-4V: 0.76, mPLUG-Owl: 0.57, LLaVA:0.5, MiniGPT-4: 0.37, InstructBLIP:0.12, and BLIP2: 0.1. These results reinforce those from our quantitative assessments in two ways: (1) scores suggest modern models still have room to improve and (2) the models' rankings are aligned. Alignment of Quantitative Metrics with Human Judgments. We next assess correlations of the five evaluation metrics we used as well as a newly proposed metric to the human judgments. We introduce the new metric to account for limitations of the five metrics, specifically accounting for valid responses not closely resembling the reference answer (in contrast to the similarity-based metrics) and enforcing factual accuracy (in contrast to the image compatibility-based measures). Following pilot testing, we chose to prompt LLaMA2 metric [75] to rate answers with respect to the reference answer by indicating if a model is totally wrong (0), partially correct (0.5), or totally correct (1). We measure correlations as follows: calculate the mean score per model across the 200 VQAs for humans and each evaluation metric and then calculate the Pearson correlation 13 between mean scores of humans and each evaluation metric.\nResults are shown in Table 4. Overall, we observe a high positive correlation between the metrics and human judgments with statistical significance (i.e. p-value less than 0.05). Metrics with the strongest correlation to human judgments are LLaMA2, followed by METEOR and BERTscore. While it's exciting to see the promise of the LLaMA metric, it's worth noting that such a model is impractical for large-scale use in its current form due to high computational costs. Nonetheless, we explore its extended use on a subset of around 2000 VQAs from VQAonline to show its potential implications on a larger scale and observe similar findings to those already reported (results are in the Supplementary Materials), specifically, the same model rankings as in Table 2, influence of different input types as in Table 3, and trends in easier versus harder topics noted in Figure 4. The major difference between LLaMA2 compared to the five mainstream metrics is that it yields a slightly broader score range (0.53 (BLIP2)-0.62 (mPLUG-Owl)). Given the potential of this metric to better align with human judgment, a valuable direction for future work is exploring more efficient versions to support such evaluation at scale.\nWe found CLIP-S is least aligned with humans from all metrics (although the results are not statistically significant). This offers some evidence for why this metric at times contradicted the other five metrics in our quantitative assessment: it focused on whether the answer is compatible with the image regardless of the answer's correctness." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "We introduce the first fully authentic VQA dataset. Our analysis reveals numerous distinct features of our new dataset, most importantly atypically long answers with inspires us to adopt six popular metrics for evaluating lengthy text. Our benchmarking of six modern VQA models reveal when and why modern models struggle. We also conducted human-based studies to qualitatively evaluate models and assess how our chosen metrics align with human judgments of answer correctness." 
}, { "figure_ref": [], "heading": "Supplementary Materials", "publication_ref": [], "table_ref": [], "text": "This document supplements the main paper with more information about:\n1. Dataset Collection (Supplements Section 3.1) 2. Dataset Analysis (Supplements Section 3.2) 3. Algorithm Benchmarking (Supplements Section 4.2)" }, { "figure_ref": [ "fig_6" ], "heading": "I Dataset Collection I.1 Dataset Source and Filtration", "publication_ref": [], "table_ref": [], "text": "The Stack Exchange data is hosted at https://archive.org/details/stackexchange in XML format. We started with 330,705 candidate visual questions. After removing visual questions without an accepted answer, it resulted in 165,766 visual questions. As mentioned in the main paper, we then conducted two filtering steps. After removing visual questions with scores of 0 or less for either the question or answer, we had 119,177 visual questions. Next, after removing visual questions with multiple images, we had 85,573 visual questions. Subsequently, removing visual questions with visual answers, left 65,849 visual questions. Examples of filtered visual questions with visual answers and multiple images are shown in Figures 6 (a), and (b), respectively. Finally, after removing examples for which the image could not be downloaded from the provided link, we got to our final dataset of 64,696 visual questions. For data format consistency, we converted all images to png format." }, { "figure_ref": [], "heading": "I.2 User Intention Taxonomy and Definitions", "publication_ref": [ "b57", "b26", "b28", "b53", "b55", "b65", "b54", "b14", "b6" ], "table_ref": [], "text": "User Intention Taxonomy. We first brainstormed 11 free-form user intention categories. To do so, we solicited help from GPT-3.5 by having it indicate via a zero-shot prompt the user intentions when given a question and context. We identified an initial 11 free-form user intention categories, which are shown in the second column of Table 6. We then [58] Six types of websites Annotators ✗ VQAv2 [27] MSCOCO Crowd workers (AMT) ✗ VizWiz-VQA [29] Captured by Blind people Blind people ✓ OKVQA [54] MSCOCO Crowd workers (AMT) ✗ DocVQA [56] UCSF Industry Documents Library Remote workers ✗ ScienceQA [66] Online K-12 learning platform Online K-12 learning platform ✗ InfographicVQA [55] Bing and Google Image Search Annotators ✗ INFOSEEK-Wikidata [15] We then finalized the taxonomy with definitions via four rounds of annotator analysis on a total of 105 visual questions. Specifically, two annotators (authors) tagged the category for each visual question. Any disagreements that arose were discussed and resolved, and the definitions associated with each taxonomy category were adjusted accordingly. We observed slight agreement accuracy14 improvement and Cohen's kappa agreement improvement after each round. Specifically, from round 1 to round 4, agreement accuracy improved from 53.3% to 87.5% and Cohen's kappa agreement improved from 45.24% to 84.69%. This process culminated in our final taxonomy and their definition, which includes the following eight categories mentioned in the main paper: advice, evidence, identification, instruction, opinion, reason, verification, and other.\nAs shown in Table 6, our final taxonomy is different from [6,10,13,31,36,40,52,73] as they focus solely on text-based question or search queries whereas our work centers on visual questions. [7] also explores user intentions for authentic visual questions for blind individuals. 
However, blind people's user intents of visual questions significantly differ from that of online users' visual questions. In fact, only one of the user intents identified for blind individuals (i.e., identification) overlaps with the user intents we have identified for our dataset.\nFinal user intention taxonomy and definitions. We show the final user intention taxonomy and definitions as follows:\n-Verification: These fact-checking questions aim to confirm or refute a hypothesis through affirmative answers (yes/no or multiple choice options). Since this question is based on fact, it cannot include subjective questions which should be classified as Opinion.\n-Identification: The expected answer for these questions is a named entity or object\nidentification. The answer is objective. -Reason: These questions require answers that explain causes underlying a particular action or event. Most \"why\" questions fall into this category. -Evidence-based: This category primarily includes questions that ask for the definition, features, description, or process of a concept, idea, object, event, or one of its attributes.\nIt also covers translation questions of seeking the meaning of a sentence or word. If a question can be classified as Verification, it should not be classified as Evidence-based. -Instruction: These questions typically involve \"how to do\" inquiries, seeking instructions, guidelines, or procedures for a specific action in real life. -Advice: This category includes questions where the user seeks personalized guidance on a specific topic. Advice questions differ from Instruction questions in that they expect subjective recommendations, while Instruction questions seek objective, stepby-step processes for performing a specific action. Advice questions may also involve finding resources or seeking better translations of sentences/words. Additionally, this category can include questions where users are looking for ideas or comments on how to improve an existing solution. If a question can be classified as Instruction, it should not be categorized as Advice. -Opinion: These questions aim to elicit subjective opinions on a topic of interest (e.g., \"what do you think about\" or \"is X good/bad\"). It might include religious questions. This category excludes Advice questions, where the focus is on the user asking the question. -Other: Other." }, { "figure_ref": [], "heading": "I.3 User Intention Annotations", "publication_ref": [], "table_ref": [], "text": "Hiring Crowdworkers. We hired three crowdworkers from Amazon Mechanical Turk to perform our task. For quality control, we only accepted workers located in the United States who had completed more than 500 Human Intelligence Tasks (HITs) with over a 95% approval rating. Annotation Task Design and Collection. We provided instructions, taxonomy definitions, and two examples per user intention category. The annotation task interface showed for each visual question the question, context, and answer.\nTo facilitate collecting high-quality results, we provided each crowdworker with one-on-one Zoom training. We also provided a qualifying annotation test that all workers passed to verify they understood the instructions.\nTo enable the assessment of high-quality results, we collected two user intention annotations from two independent workers (worker A and B) per VQA instance (i.e. image-question-context-answer) for 105 randomly selected VQAs, one from each of 105 topics. We then detected if their annotations matched. 
We found that 25 out of 105 did not match. Thus, we hired a third worker (worker C) to break the tie by instructing that individual to select one of the two provided user intentions. We paid a $40 Amazon Gift Card to workers A and B and a $5 Amazon Gift Card to worker C." }, { "figure_ref": [], "heading": "Instructions for User Intention Annotation. The following are the instructions provided for workers A and B:", "publication_ref": [], "table_ref": [], "text": "-Read the definitions and the two examples for each category.\n-We will present to you 105 questions. Each question contains a question, context, and a reference answer. Please read them and categorize the primary user intention according to step 1.\n-If there are any non-English questions, or if the questions contain many proper nouns that are difficult to interpret, you can use a translation tool to translate them into your native language.\n-Fill in the user intent category for each question in the provided spreadsheet." }, { "figure_ref": [], "heading": "I.4 Potential Benefits of User Intention in VQA", "publication_ref": [ "b9", "b22", "b30", "b35", "b72" ], "table_ref": [], "text": "Though we only provide an initial exploration of user intention in the main paper, we discuss its potential uses here to encourage future extensions. Overall, understanding and identifying user intent can potentially help create human-centered VQA models for a better user experience. First, it facilitates the creation of datasets with better answers tailored to user needs. Second, analyzing the prevalence of each intention in real-world scenarios, examined alongside model performance for each type, can help developers prioritize their efforts. Third, a model with the ability to understand user intent can potentially provide answers that more directly meet users' needs, rather than related, factually true responses that do not. For instance, when asked a visual question such as \"why doesn't the bulb work,\" users may prefer practical solutions to fix the bulb rather than just reasons and explanations. Without recognizing user intentions, models might only offer reasons.\nTable 6: Eight user intent taxonomy categories: one from our dataset, one for free-form intent (derived through GPT-3.5 bootstrapping and manual labeling), and six from existing research. Our taxonomy specifically addresses user intentions for visual question answering, in contrast to the taxonomy from [10], which focuses on Bing query search, and others [6,23,31,36,73]. The remaining rows of Table 6 are sparsely populated and cover the categories Comparison, Definition, Find, Resource, Language, Temporal, Calculation, Attribute, List, Weather, Location, Explain, Request research, and Other." }, { "figure_ref": [], "heading": "II Dataset Analysis", "publication_ref": [ "b13", "b69", "b11", "b11", "b69", "b40", "b7", "b68", "b62", "b50", "b17" ], "table_ref": [ "tab_4" ], "text": "We supplement the main paper by comparing the sources of the visual questions for eight datasets in Table 5. Only our dataset and the VizWiz-VQA dataset are sourced from authentic use cases, with both the images and visual questions coming from real users. As summarized in Table 5, Context-VQA [58] sources images from six types of websites with questions written by annotators; VQAv2 [27] and OKVQA [54] use MSCOCO images with questions from crowd workers (AMT); VizWiz-VQA [29] uses images captured by blind people with questions asked by blind people; DocVQA [56] uses the UCSF Industry Documents Library with questions from remote workers; ScienceQA [66] sources both images and questions from an online K-12 learning platform; InfographicVQA [55] uses images from Bing and Google Image Search with questions from annotators; and INFOSEEK-Wikidata [15] sources images from nine image classification and retrieval datasets.\nTable 7: Details about the six benchmarked models' model configuration and training data. CC * comprises datasets from COCO [14], CC3M [70], and CC12M [12]. 
CC stands for Conceptual Caption [12,70]; VG stands for Visual Genome [41]; CY stands for COYO-700M [8]; L400 stands for LAION 400M [69]; SBU [63] contains 1 million images with captions. LLaVA-I stands for 158K multimodal instruction-following data in LLaVA [51]. QA * stands for 13 question-answering datasets in InstructBLIP [18]." }, { "figure_ref": [], "heading": "III Algorithm Benchmarking", "publication_ref": [ "b17", "b45", "b50", "b61", "b80", "b83", "b87" ], "table_ref": [], "text": "Architectures. Details about each of the six benchmarked models are provided in Table 7 and Table 8. Specifically, we report each model's image encoder, language encoder, adapter, and their training data [18,46,51,62,81,84,88]. Dataset sources that have data contamination are marked in red and sources suspected with data contamination are marked in blue 15 .\nModel Implementations. For GPT-4V, we used the gptv-2023-07-01-preview version.\nFor one-shot settings, we created the prompts by adjusting prompts from mPLUG's official repository and Azure's few-shot prompting examples for groundedness evaluation. The prompts we used is exemplified in Figure 7.\n\"The following is a conversation between a curious human and AI assistant. The assistant gives helpful, detailed, and polite answers to the user's questions." }, { "figure_ref": [ "fig_7" ], "heading": "## Example Task", "publication_ref": [ "b74", "b86", "b20" ], "table_ref": [], "text": "Human: <image_example> Human: {question_example} + {context_example} AI: {reference_answer}\" ## Actual Task: Human: <image> Human: {question} + {context} AI: \" Fig. 7: The prompt for one-shot setting.\nAs mentioned in the main paper, to complement five popular evaluation metrics, we also introduce a new evaluation metric based on LLaMA2 [75]. We tested four different prompts for it to assess the correctness of model-generated answers: prompting to output continuous scores from 0 to 1, discrete scores (0, 0.5, 1), following [87], or Azure groundness evaluation examples. We selected the best one from our preliminary analysis, which is shown in Figure 8.\nSystem: \"You are a helpful AI assistant. You will be presented with a REFERENCE ANSWER and a PREDICTED ANSWER. Your task is to rate the correctness of the PREDICTED ANSWER. Chose one of the following rating: 0 (Totally Wrong), 0.5 (Partially Correct), or 1 (Totally Correct). Just complete the last space of the correctness score.\" User: \"REFERENCE ANSWER: {Reference} PREDICTED ANSWER: {Prediction} Score: \" Subset Creation. As mentioned in the main paper, due to GPT-4V model requiring a fee for their usage [21], and due to the computational and financial cost of running LLaMA2 16 , we only evaluate GPT-4V and LLaMA2 on a subset of data. We created this subset by randomly selecting 20 VQAs from each of the 105 topics in the test set, except for the 17 topics containing less than 20 examples where we used all available VQAs. The resulting subset contains 1903 VQAs. Another reason for creating the subset with random sampling is due to the original dataset displaying a long-tail distribution across different topics." }, { "figure_ref": [ "fig_8", "fig_9" ], "heading": "III.1 Results on Subset.", "publication_ref": [], "table_ref": [ "tab_7", "tab_8", "tab_9", "tab_10" ], "text": "Overall Results on Subset. Results are shown in Table 9. As mentioned in the main paper, GPT-4V achieves the best performance on the subset, based on five of the six evaluation metrics. 
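As a concrete illustration of the LLaMA2-based correctness metric described above, the following minimal sketch assembles the discrete-score prompt from Figure 8 and parses the returned rating; the helper names are ours, and the call to an actual LLaMA2 chat model is left abstract because it depends on the deployment.

```python
def build_llama2_metric_prompt(reference: str, prediction: str) -> dict:
    """Assemble the system/user messages for the 0 / 0.5 / 1 rating prompt (Figure 8)."""
    system = (
        "You are a helpful AI assistant. You will be presented with a REFERENCE ANSWER "
        "and a PREDICTED ANSWER. Your task is to rate the correctness of the PREDICTED "
        "ANSWER. Chose one of the following rating: 0 (Totally Wrong), 0.5 (Partially "
        "Correct), or 1 (Totally Correct). Just complete the last space of the "
        "correctness score."
    )
    user = f"REFERENCE ANSWER: {reference} PREDICTED ANSWER: {prediction} Score: "
    return {"system": system, "user": user}


def parse_correctness_score(generation: str) -> float | None:
    """Map a model completion onto the discrete score set {0, 0.5, 1}."""
    for token in generation.strip().split():
        cleaned = token.strip(".,()")
        if cleaned in {"0", "0.5", "1"}:
            return float(cleaned)
    return None  # Unparseable generations can be dropped or re-queried.


# Hypothetical usage, assuming an abstract llama2_chat(system, user) -> str callable:
# score = parse_correctness_score(llama2_chat(**build_llama2_metric_prompt(ref, pred)))
```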
Other models have the same rankings as those shown in the main paper.\nmPLUG-Owl Model in One-shot Setting, Evaluated with LLaMA2 Metric. We explore the mPLUG-Owl model in the one-shot setting on the subset, evaluated with the LLaMA2 metric. Results are shown in Table 10. This strengthens our finding in the main paper that none of the one-shot examples enhance performance for mPLUG-Owl and that exemplars with matching topic tags consistently outperform randomly chosen ones.\nAnalysis With Respect to Input Types, Evaluated with LLaMA2 Metric. We then analyze the predictive power of each input for the top-performing models, mPLUG-Owl and GPT-4V, on the subset, evaluated with the LLaMA2 metric. Results are shown in Table 11. The findings from the LLaMA2 metric strengthen the findings we discussed in the main paper: (1) the best-performing model is the one given all information (Q+C+I), (2) the worst-performing model is the one lacking context (i.e., Q+I) for both GPT-4V and mPLUG-Owl, and (3) the context is the most valuable information source for arriving at the target answer, while the predictive powers of questions and images are nearly negligible.\nAnalysis With Respect to VQA Topics. We next analyze the influence of VQA topic on the top-performing models, mPLUG-Owl and GPT-4V, on the subset, with respect to the five super-categories in Table 12 and with respect to the 105 topics, evaluated with the LLaMA2 metric, in Figure 9. Overall, the models perform best in the Business category and worst in the Life & Arts category. Note that this does not contradict our findings in the main paper, as models still perform well in Science and poorly in Culture & Recreation. We suspect that the differences between these and the main paper's findings are due to distribution differences (different proportions of each topic within each super-category) between the subset and the entire dataset. To support our hypothesis, we also calculated METEOR and BERTscore, which are highly correlated with human judgments, and observed the same trend as the LLaMA2 metric on the subset.\nTo complement the main paper, we also report mPLUG-Owl and LLaVA across the 105 topics with three other metrics in Figure 10. Overall, this strengthens our findings in the main paper regarding which topics are relatively easy (e.g., Mathematics, Economics, and Computer Science) and which are hard (e.g., Русский язык (Russian Language), Hinduism, Puzzling). In addition, the LLaMA2 metric indicates that some Culture & Recreation topics (e.g., \"The Great Outdoors\") and some Life & Arts topics (e.g., \"Pets\", \"Gardening & Landscaping\", \"Physical Fitness\") are relatively hard for the GPT-4V, mPLUG-Owl, and LLaVA models." }, { "figure_ref": [], "heading": "III.2 Experiments with Domain Experts", "publication_ref": [ "b79" ], "table_ref": [], "text": "Topic Selection. We chose the 10 topics based on initial manual selection and expert availability. Initially, we manually identified 20 candidate topics for human evaluation. This selection considered diversity, including using three of the seven topics from [80] as well as topics on math (i.e., stats), language (i.e., Chinese), and daily life (e.g., Gardening and Fitness). 
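A minimal sketch of the per-topic and per-super-category aggregation behind Table 12 and Figures 9 and 10 is shown below; it assumes a hypothetical results table with one row per visual question and illustrative column names (topic, super_category, llama2_score), not our released files.

```python
import pandas as pd

# Hypothetical per-question scores; in practice these would be loaded from the
# evaluation outputs for the subset (one row per visual question).
results = pd.DataFrame({
    "topic": ["Mathematics", "Mathematics", "Pets", "Economics", "Hinduism"],
    "super_category": ["Science", "Science", "Life & Arts", "Business", "Culture & Recreation"],
    "llama2_score": [1.0, 0.5, 0.0, 1.0, 0.5],
})

# Mean metric score per topic (as visualized across the 105 topics).
per_topic = results.groupby("topic")["llama2_score"].mean().sort_values()

# Mean metric score per super-category (as summarized in the super-category tables).
per_super_category = results.groupby("super_category")["llama2_score"].mean()

print(per_topic)
print(per_super_category)
```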
The topics' frequency span from very frequent (Stats, Gardening, and Music) to moderate frequency (Economics, AI) to less frequent (Law). Subsequently, we selected ten topics based on the availability of domain experts to represent each area. We acknowledge that these 10 topics might not fully present the entire dataset, as it is a chicken-and-egg problem: we don't know the difficulty level of the 105 topics beforehand and thus have to first conduct a human evaluation on a few topics to decide which metrics are most human-aligned for quantitatively evaluating the entire dataset." }, { "figure_ref": [ "fig_10", "fig_1" ], "heading": "Domain Expert", "publication_ref": [ "b12", "b15", "b16", "b17", "b44", "b49" ], "table_ref": [ "tab_2", "tab_11", "tab_11", "tab_4", "tab_2", "tab_11", "tab_4", "tab_12", "tab_13", "tab_12", "tab_14" ], "text": "Hiring. We hired ten domain experts to represent each of the ten fields (topics). To guarantee English proficiency, as most of our visual questions are written in English, we only accepted experts located in the United States. Annotation Task Interface. We show a screenshot of the annotation instructions in Figure 11 and task interface in Figure 12. The link to the code for our web-based annotation tool is available at https://github.com/VQAonline/VQAonlineVisualization.\nData Collection Quality Control Mechanism. We provided a one-on-one training session with each domain expert to introduce the task and instructions. Afterwards, we gave each expert our contact information so that they could contact us with any questions about their tasks and receive feedback quickly. We also inspected the time each expert spent on each visual question as a proxy for assessing whether the hired domain experts were finishing the task by quickly selecting random options. The median and mean times are 1.4 minutes and 1.29 minutes respectively for assessing each model answer per visual question. We compensated each domain expert with a $75 Amazon Gift Card, resulting in an average hourly wage of $26/hour. Spearman Correlation. To supplement the main paper, we also report the Spearman correlation scores for each evaluation metric: GPT-4 (1 -correlation, 0.000 -statistical significance), ROUGEL (1, 0.000), METEOR (0.943,0.005), BERTScore (0.829, 0.042), RefCLIP-S (0.771, 0.072), and CLIP-S (0.600,0.208). Spearman correlation rankings for the models follow the order from human judgments, reinforcing the main paper's findings that reference-based metrics and human judgments are highly correlated.\nCorrect, Partially Correct, and Incorrect Examples From Expert Evaluation. We show examples of expert annotations for the six benchmarked models in Tables 13,14, 15, 16, 17, 18, spanning those that are correct (13), partially correct (Table 14 and Table 15), and incorrect (16,17,18). One mechanics's example shows where most of the models can answer correctly for a closed-ended visual question related to recognition (Table 13).\nTwo economics examples show where the models partially fail for analyzing an infographic and requiring specific domain knowledge (Table 14 and Table 15). These highlight that models labeled \"partially correct\" can occur because of correctly answered closed-ended questions with incorrect explanations or insufficient explanations matching the reference answer. For example, GPT-4V, mPLUG-OWl, MiniGPT-4, and LLaVA answer \"yes\" but don't match any key points of the reason in the reference answer and have factual errors in their given reasons 17 . 
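As a complement to the qualitative examples, the Spearman correlations reported above can be computed with SciPy as in the minimal sketch below; the per-model scores here are hypothetical placeholders, not our measured values.

```python
from scipy.stats import spearmanr

# Hypothetical per-model scores, ordered identically across the two lists
# (one entry per benchmarked model).
human_scores = [0.62, 0.55, 0.48, 0.40, 0.31, 0.20]   # e.g., expert judgments
metric_scores = [0.70, 0.62, 0.59, 0.59, 0.58, 0.53]  # e.g., an automatic metric

correlation, p_value = spearmanr(human_scores, metric_scores)
print(f"Spearman correlation: {correlation:.3f} (p = {p_value:.3f})")
```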
InstructBLIP answered \"yes\" without providing any reason, but simply rephrased the context. For each example, key points in the reference answer were either unsatisfactorily conveyed in the model's answer or were altogether absent. We also show examples where models gave incorrect answers from a failure to handle the conflict in the question, context, and image. These are shown in Tables 16,17, and 18. In Table 16 and 17, while the context and question provided incorrect information by indicating the plant has mold and asking how to remove the mold, the ground-truth answer corrects the questioner saying it's not mold but moss/algae so there is no need to remove it. All the models trusted the natural language context and question more than the image and then thus were misled to answer the question (partially) incorrectly 18 by offering methods to remove mold 19 . The failure to handle conflict in question, context, and image is also common in other topics (e.g., Chinese Language), as shown in Table 18. While the context provided incorrect information by indicating the characters are 心兴, the ground-truth answer corrects the questioner saying it's indeed 心头. All the models trusted the context more than the image and then thus were misled to answer the question incorrectly. This type of failure, also known as the hallucination issue, is also discussed in [45,50], where VLMs may generate descriptions or answers conflicting with the given image to align with the natural language instructions or questions, influenced by the strong language priors of large language models. We will release our dataset to facilitate further exploration of the hallucination issue in the authentic use cases. " }, { "figure_ref": [ "fig_1" ], "heading": "III.3 Qualitative Examples from Different Models", "publication_ref": [], "table_ref": [ "tab_7", "tab_7" ], "text": "We additionally provide a qualitative example from different models in Table 19, where the visual questions require significant reasoning. We think the visual question in Table 19 is interesting as algorithms like AlphaGo or MuZero can surpass humans in chess games, but the vision and language models fail in this domain. For example, MiniGPT4, InstructBlip, and BLIP2 didn't even understand the visual question. LLaVA directly copied the example provided in the context. mPLUG-Owl also directly copied the example provided in the context except omitted two moves, leading to an illegal move at round 8 (half move #15 Rh4). In contrast, GPT-4V gave an answer that on the surfacelevel seemed to make sense. Yet, GPT-4V violated the requirement of not capturing any pawn in the third round (half move #5 Bxf7+) as well as made an illegal move in round 11th (halfmove #21 gxf6). Of note, we verified all models' answers with https://www.apronus.com/chess/pgnviewer/.\nFig. 12: Screen shot of the annotation task interface. Experts recorded their results in spreadsheets.\nA visual question where most of the models can answer correctly:\nCategory Mechanics Question Tire sidewall damage, replace immediately? Context Somehow I was lucky enough to hit something when going downhill. And now my tire looks like in the picture. So should I still drive with this tire or get it replace immediately? And to replace it, is it safe to drive to the tire shop, which is a few miles away? Reference Answer\nSidewall damage is a serious issue. Sidewall is the structural part of the tire, and damage to it can result in dangerous consequences. Replace your tire immediately. 
If you know how to put the spare tire on (and there is a spare tire in your car), please do it now, and drive only to the tire shop. Spare tires are not designed for extended drives, so don't start driving around in the spare tire. If you have driven more than a few miles already with the tire before looking at the damage, you may have enough data to indicate that it may survive to the tire shop. I would however prefer the spare tire approach in your case. If there's no spare tire, then that would be a difficult decision: to have your car towed with the broken tire off the ground, or to drive the car. You know more about the details in your case than I do, because you didn't indicate how many miles exactly you drove with the tire after the incident." }, { "figure_ref": [], "heading": "GPT-4V (Correct)", "publication_ref": [], "table_ref": [], "text": "This looks like a significant sidewall damage and driving on this tire can be dangerous. It is recommended to replace the tire immediately. If you have a spare tire, it would be safer to use the spare and drive to the tire shop to get a new tire. If you don't have a spare, drive slowly and cautiously to the tire shop, and avoid highways and high speeds.\nmPLUG-Owl (Partially correct) Yes, it is advisable to replace the tire immediately if it has suffered significant damage. The sidewall of the tire is crucial for maintaining its structural integrity while driving. Damage to the sidewall can lead to air loss, which can increase the risk of a blowout, especially on high speeds or when making sharp turns.\nIn such cases, driving with a damaged tire can be extremely dangerous and could result in a catastrophic accident. It is safer to replace the tire as soon as possible to ensure your safety on the road. To drive to the tire shop, it is usually safe to do so as long as the tire is not leaking air. However, it's important to check for any visible signs of air leakage while driving and to ensure that the tire pressure is maintained at the recommended level. If you notice any signs of air leakage, pull over to a safe location and stop as soon as possible to check for any further issues." }, { "figure_ref": [], "heading": "LLaVA (Correct)", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Given the severe sidewall damage to the tire, it is recommended to replace it immediately. While it is technically possible to drive with the damaged tire, there are risks associated with doing so. The damage could worsen, leading to a potential blowout, which could result in loss of control, accidents, or further tire damage. Additionally, driving with a damaged tire can impact the vehicle's handling and braking capabilities. It is safer to replace the damaged tire with a new one, ensuring optimal road safety and vehicle performance. As for driving to the tire shop, it is advisable to do so if the tire shop is nearby and if there are no other more severe damages to the vehicle that require immediate attention. If in doubt, consult an expert or seek assistance from roadside services if available.\nMiniGPT-4 (Correct) It is not recommended to drive with a damaged tire, as it could result in a blowout or loss of control of your vehicle. If the tire is severely damaged, it is important to replace it immediately, as it is not safe to continue driving on it. If the tire is only slightly damaged, it may still be safe to drive on it for a short distance, but it is still recommended to have it inspected and potentially replaced. 
It is safe to drive to the tire shop a few miles away, but it would be wise to make sure that the tire shop is open and able to serve you before making the trip. If you cannot replace the tire immediately, you should drive as little as possible and have the tire inspected as soon as possible.\nInstructBLIP (Correct) replace immediately BLIP2 (Incorrect) yes, it is safe to drive to the tire shop Table 13: An example where most of the models can answer correctly. Models' answers were verified and the errors were highlighted in red by a Mechanics Ph.D. You can definitely do so! There are two versions of the income effect. The one that you are referring to is called the Equivalent variation. The two ways of looking at the problem were introduced by Sir John Richard Hicks. To see the impact of a price change in monetary terms, we ask how much money should have been taken before the price change to leave a consumer at the same utility level he attains after the price change. This is the same thing as you had asked: can we reverse the two procedures and measure the income effect first by drawing a line parallel to the original budget line and tangent to the new indifference line? I hope this has helped but if you wish to explore further, see the 'Consumer Surplus' chapter in Hal Varian's 'Intermediate Microeconomics'.\nGPT-4V (Partially correct) Yes, you can reverse the two procedures and measure the income effect first, but it is less intuitive and less accurate compared to measuring the substitution effect first. The reason for this is because the income effect is the difference between the old and new optimal bundles (A and D) after removing the substitution effect. Since the substitution effect is a result of changes in relative prices, in order to accurately measure the income effect, you would first need to find the new optimal bundle that would have been chosen if the relative prices had remained the same. In practice, this is difficult to do because you would have to make assumptions about how the individual's preferences would have changed, and you would need to find the new optimal bundle on the new indifference curve. It would be much easier and more accurate to simply measure the substitution effect first, and then subtract that from the total effect to find the income effect. Here are some of my seedlings, sown a little bit less than a month ago. As you can see, there's mold starting to form. I am using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect from contamination. What would be an easy way to get rid of the mold without hurting the seedlings? Someone suggested to use hydrogen peroxide, but I'm not sure it's safe for the seedlings. Ideas? Reference Answer That isn't mold, it's moss/algae. You can tell (by the green) that it's growing from energy obtained by photosynthesis, so it's not a big threat competition-wise. It's a sign that the mix is constantly moist on the surface, and also grows faster in high humidity. It will not damage the cactus seedlings, but here's a tip. When cacti germinate, they grow very slowly/remain the same size for a long time. That's because they're developing a strong root system. A sign they are established is that they will begin to grow at a faster rate. This means the root system is growing deep, and you can allow the top 1/2 of mix to dry between waterings. This will keep moss from establishing. 
For now, you can use a toothpick and stir up the top layer if you want, to slow the moss, as described here: Is there anything wrong with allowing moss to grow around herbs? Or you can use a pair of tweezers and carefully remove the moss. In the long run, it really doesn't matter. The moss won't be a problem after the seedlings mature a little, and you don't have to water so frequently. And by the way, those seedlings look great! I hope you have success with them.\nGPT-4V (Partially correct) To get rid of mold without hurting your cactus seedlings, try the following methods: 1. Cinnamon: Cinnamon has antifungal properties and can be used as a natural fungicide. Simply sprinkle some ground cinnamon over the affected areas, and it should help control the mold growth. 2. Baking soda: Baking soda is also effective at controlling mold growth. Mix one tablespoon of baking soda with a quart of water, and spray the solution over the affected areas. 3. Vinegar: Vinegar also has antifungal properties and can be used as a natural fungicide. Mix one part vinegar with three parts water, and spray over affected areas. Some other tips to help prevent mold growth in the future include: -Limiting the amount of moisture in the soil -Increasing air circulation around the seedlings -Avoiding overwatering -Keeping soil temperature between 70-80 degrees Fahrenheit -Keeping relative humidity between 40-60%. Don't use hydrogen peroxide as it can be harmful to the seedlings. mPLUG-Owl (Incorrect) It is essential to take steps to safely eradicate any mold growth on your cactus seedlings without harming them. One option is to use hydrogen peroxide. However, it is important to be cautious when using it on seedlings. Hydrogen peroxide is a strong oxidizing agent that can cause burns or damage to the cactus seedlings if used improperly. To safely use hydrogen peroxide to treat mold on the seedlings, follow these steps: 1. Use a dilute solution of hydrogen peroxide. The best dilution ratio is around 1 tablespoon of 3% hydrogen peroxide per gallon of water. 2. Pour the hydrogen peroxide solution into a shallow dish or container. 3. Soak the affected cactus seedlings in the solution for a few minutes. 4. Remove the seedlings from the solution and place them back in the pot with the Ziploc bag. 5. Seal the Ziploc bag, making sure to leave a small opening at the top to allow ventilation. By following these steps, you can ensure that the hydrogen peroxide solution will not come in direct contact with the cactus seedlings and will effectively eradicate any mold without harming the plants. It is essential to monitor the seedlings carefully and remove them from the hydrogen peroxide solution if any signs of injury or damage appear. If you are not confident in using hydrogen peroxide or have any concerns about its effectiveness, you may consider using a more gentle method, such as washing the seedlings with a gentle soap and water solution and allowing it to dry thoroughly. The cactus seedlings in the image were sown a little bit less than a month ago. As you can see, there's mold starting to form. I am using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect against contamination. As you can see, there's mold starting to form. I am using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect against contamination. What would be an easy way to get rid of the mold without hurting the seedlings? 
Someone suggested to use hydrogen peroxide, but I'm not sure it's safe for the seedlings. Ideas? BLIP2 (Incorrect) I'm using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect against contamination I am using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect against contamination I am using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect against contamination I am using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect against contamination I am using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect against contamination I am using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect against contamination I am using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect against contamination I am using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect against contamination I am using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect against contamination I am using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect against contamination I am using the baggie method whereby the pot with the seeds is enclosed in a Ziploc bag, to maintain high humidity and protect against contamination. That character is not「兴」(to rise > excited, happy), it is「头」(head). 「心头」(literally heart + head) means mind -the picture is saying that you use your mind to think/reason." }, { "figure_ref": [], "heading": "GPT-4V (Incorrect)", "publication_ref": [], "table_ref": [], "text": "心兴 is a colloquial expression in Chinese. It roughly means \"to have a happy mood or personally interested in something\". It's often used in the context of \"心兴来了\" (to suddenly become interested or happy) or \"心兴去了\" (to suddenly lose interest or become unhappy). In the picture, the character is saying \"我心兴来了\" which means \"I am suddenly interested\" or \"I am suddenly in a good mood\". As for your question about 想, 想 does mean \"to think\", but it can also mean \"to want\" or \"to wish\". Chinese is a contextual language, so the meaning of a word or phrase can change depending on the context it is used in.\nmPLUG-Owl (Incorrect) Yes, you are correct! The meaning of \"心兴\" is to be enthusiastic or have enthusiasm. It is often used to describe someone who is excited or passionate about something, such as a new project or idea. The meaning of the phrase \"心兴\" in the picture is to have enthusiasm for learning. It is a part of a Chinese character that is often used in children's books and learning materials, such as the colorful picture you have provided. The image shows a young girl holding a book with this character on it, indicating that she is interested in learning and has an enthusiasm for reading and education." }, { "figure_ref": [], "heading": "LLaVA (Incorrect)", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "In this context, \"心兴\" likely means \"thinking\" or \"intelligence\" in Chinese. 
The image, which includes pictures of children with various expressions and a book with Chinese characters, suggests that the scene involves a learning activity or a discussion about the concepts covered in the book. The presence of \"心 兴\" in the picture implies that it is a theme or an important idea related to the content being presented.\nMiniGPT-4 (Incorrect) \"心兴\" (xīn xìng) in the picture is the Chinese word for \"heartfelt\" or \"heartfelt\" (a feeling of contentment or happiness). The character for \"心\" (xīn) means \"heart\" or \"mind,\" while the character for \"兴\" (xìng) means \"to feel happy or content.\" Therefore, \"心兴\" (xīn xìng) can be translated as \"feeling contented or happy.\" The character 想(cāng) in Chinese means to think or to ponder, so it's not the same as \"心兴\" (xīn xìng) .\nInstructBLIP (Incorrect) yes BLIP2 (Incorrect) \"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\" LLaVA (Incorrect) I cannot play a game of chess directly with you, but I can provide you with a solution to the problem you have presented in the image. The image shows a chess board with a total of 16 pawns and two kings, with a total of 18 pieces on the board. In order to capture all the pieces except for the two kings and the 16 pawns, the following sequence of moves can be followed: This sequence of moves will capture all the pieces except for the two kings and the 16 pawns. The solution has been found using the minimax algorithm, which is an optimal algorithm for solving two-player games such as chess.\nMiniGPT-4 (Incorrect) As an AI language model, I can certainly analyze the position you provided and try to find a continuation that may lead to a win for White, but it's important to note that I am not a chess engine and my analysis will not be as deep or as accurate as a specialized chess engine like Stockfish. Given the position you provided, I can see that Black's king is exposed and vulnerable to a quick checkmate. One move that may be try to force a win for White is 21. Rh8+. This move aims to put the king in check and force black king to move, and then exploit the exposed king with a series of moves like Rh6, Rh5, and Rh4, putting pressure on the king and threatening a checkmate. It is also possible to try to exploit the open position with a pawn storm with moves like e4 and e5, to put pressure on the black king and try to win the game. It's important to note that this is just one possible move and that a skilled chess player player could find other other other moves to try to to try to win the the game. Also, Stockfish or other chess engines will likely provide a more detailed analysis of the position and a better best continuation. Table 19: A visual question requires a significant reasoning process." 
}, { "figure_ref": [], "heading": "Model Configuration", "publication_ref": [ "b45", "b87", "b50", "b83", "b13", "b17", "b61" ], "table_ref": [], "text": "Img-text Data Visual Instruction Data Models Img Encoder Lang Encoder Adapter Source Size Source Size BLIP2 [46] ViT-g/14 FlanT5XL Q-Former CC * , VG, SBU, L400 129M --MiniGPT-4 [88] BLIP2-VE Vicuna 13B FC layer CC, SBU, L400 5M CC+ChatGPT 3.5K LLaVA [51] ViT-L/14 LLaMA 13B FC layer CC3M 595K LLaVA-I 158K mPLUG-Owl [84] ViT-L/ 14 LLaMA 7B LoRA CC * , CY, L400 204M LLaVA-I 158K InstructBLIP [18] ViT-g/14 FlanT5XL Q-Former --QA * 16M GPT-4V [62] Unknown Unknown -Unknown ---Table 8: Details about the six benchmarked models' language encoders and image encoders. L400, L2B, and C4 are all derived from Common Crawl. (L400=LAION-400; L2B=LAION-2B; C4=Colossal Clean Crawled Corpus)." }, { "figure_ref": [], "heading": "Encoder Source", "publication_ref": [ "b63", "b68", "b21", "b67", "b15", "b64", "b64", "b1", "b8", "b18", "b46", "b77", "b82", "b66", "b76", "b0", "b10", "b16", "b24", "b38", "b41", "b48", "b60", "b75", "b73", "b64", "b23", "b43" ], "table_ref": [], "text": "ViT-L-14 [64] L400 [69] ViT-g/14 [22] L2B [68] FlanT5XL [16] Finetuned based on T5 [65] while T5 was trained with C4 [65]. Finetuned with 69 Multi-task finetuning with instructions (Muffin) datasets [2,9,19,47,78,83], 55 datasets from T0-SF [67], and 375 natural instructions v2 (NIV2) datasets [77] and 9 Chain-of-Thought (CoT) datasets derived from [1,11,17,25,39,42,49,61,76] LLaMA [74] CommonCrawl, C4 [65], Github, Wikipedia, Gutenberg and Books3 [24], ArXiv [44], and QAs from StackExchange's 28 largest websites." }, { "figure_ref": [], "heading": "Vicuna", "publication_ref": [ "b73", "b64", "b23", "b43" ], "table_ref": [], "text": "Finetuned based on LLaMA [74] with user-shared conversations, so also with CommonCrawl, C4 [65], Github, Wikipedia, Gutenberg and Books3 [24], ArXiv [44], and QAs from StackExchange's 28 largest websites.\nA visual question needing specific domain knowledge:\nmPLUG-Owl (Partially correct) Yes, you can measure the income effect before knowing the substitution effect by drawing a line parallel to the original budget line and tangent to a new indifference curve. The income effect is the effect on consumption of a given good when the income of a consumer increases or decreases. To measure the income effect, we need to find the new indifference curve that represents consumption choices of consumers after their income has changed. The original budget line represents consumption choices of consumers before the change in income, while the new indifference curve represents consumption choices after the change in income. To find the new indifference curve, we can use the budget line equation: y = C(x, y) = w(y -t(x, y)) where x and y represent consumption of shelter and composite goods, respectively, and t(x, y) represents the marginal utility of composite goods. C(x, y) is the budget line, and w represents the consumer's budget. The budget line is the line of points where the total utility is equal to a given consumer's budget. To measure the income effect, we need to find the point of intersection between the original budget line and the new indifference curve, which represents consumption choices after the change in income. The new indifference curve can be found by solving the budget equation using the new income level and the new marginal utility of composite goods. 
To find the point of intersection between the original budget line and the new indifference curve, we need to solve the budget equation using the new income level and the new marginal utility of composite goods: w(y -t(x, y)) = y -t(x, y) Substituting the new marginal utility of composite goods into the budget equation, we get: w(y -t(x, y)) = y -t(x, y) = y -t((x1, y1), (x2, y2)) where (x1, y1) and (x2, y2) are the consumption points on the new indifference curve. We can then solve for the point of intersection between the original budget line and the new indifference curve by substituting x1 and y1 into the budget equation: w(y -t(x1, y1)) = yt(x1, y1) = y.\nLLaVA (Partially correct) Yes, it is possible to measure the income effect first by drawing a line parallel to the original budget line and tangent to the new indifference curve. This can be done by following the steps you described in your question. The income effect represents the change in consumption due to a change in income, and it can be calculated by comparing the consumer's initial position (point A) to the new optimal consumption point (point D) after the change in the price of shelter. The substitution effect, on the other hand, is the effect of changing the relative price of shelter and composite goods on the consumer's choice of consumption bundle. It can be derived from the difference between the tangent line parallel to the new budget line and the indifference curve, and the income effect.\nMiniGPT-4(Partially correct) Yes, you can certainly measure the income effect first by drawing a line parallel to the original budget line and tangent to the new indifference curve. This approach is commonly used in economics to measure the income effect of a change in the price of a good or service. The income effect is the change in the quantity demanded of a good or service due to a change in income. When income increases, the income effect causes the consumer to shift the demand curve to the right, leading to an increase in the quantity demanded of the good or service. Similarly, when income decreases, the income effect causes the demand curve to shift left, resulting in a decrease in the quantity demanded. To measure the income effect using a substitution effect approach, you would draw a line parallel to the original budget line (B1) that is tangent to the new indifference curve (I0') that represents the consumer's maximum utility point after the change in the price of the good or service. The point of tangency is denoted by the letter 'P' on the new budget constraint. The ratio of the areas of the triangles PBC and PC defines the magnitude of the income effect, which can be measured in terms of the quantity demanded of the good or service. Note that this approach is not the most intuitive way to measure income effect. It's more common to draw a new budget constraint to measure the income effect. The new budget constraint after InstructBLIP (Partially correct) Yes, it is possible to reverse the two procedures and measure the income effect first by drawing a line parallel to the original budget line and tangent to the new indifference line." }, { "figure_ref": [], "heading": "BLIP2 (Incorrect)", "publication_ref": [], "table_ref": [], "text": "No, you can't measure the income effect first by drawing a line parallel to the original budget line and tangent to the new indifference line. " } ]
Visual Question Answering (VQA) entails answering questions about images. We introduce the first VQA dataset in which all contents originate from an authentic use case. Sourced from online question answering community forums, the dataset is called VQAonline. We characterize this dataset and how it relates to eight mainstream VQA datasets. Observing that answers in our dataset tend to be much longer (i.e., a mean of 173 words) and so are incompatible with standard VQA evaluation metrics, we instead use popular metrics for longer text evaluation to evaluate six state-of-the-art VQA models on VQAonline and report where they struggle most. Finally, we analyze which evaluation metrics align best with human judgments. To facilitate future extensions, we publicly share the dataset at: https://vqaonline.github.io/.
Fully Authentic Visual Question Answering Dataset from Online Communities
[ { "figure_caption": "Fig. 1 :1Fig.1: VQA examples from our VQAonline dataset and three mainstream VQA datasets[27,66,71]. VQAonline is the first VQA dataset to originate from an authentic use case end to end, including with authentic context, answers, and topics/categories labels. It also is the first VQA dataset from an online question answering platform. A critical distinction of VQAonline, which necessitates a new evaluation methodology, is its lengthy answers. (Q=question, C=context, A=answer).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Number of visual questions per topic (in log scale). The colors, as defined in the legend, indicate the super-category for each topic.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Splits. VQAonline supports few-shot learning by containing training, validation, and test splits of 665, 285, and 63,746 examples respectively. We created training and validation splits by randomly selecting from each of the 105 topics in our dataset 7 examples for training and 3 examples for validation for topics with at least 20 examples.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig. 3: Examples of three visual questions with three different user intents.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Performance of the two top-performing VQA models, mPLUG-Owl and LLaVA, for each of 105 topics with their five super-categories represented in 5 different colors. Results are shown with respect to three evaluation metrics: (a) METEOR, (b) BERTscore, (c) RefCLIP.For visualization simplicity, we show text labels only for topics with interesting identified trends (We omitted \"language\" for each language topic, such as \"Esperanto\" instead of \"Esperanto Language\", and we omitted topics with less than 10 data points, such as Esperanto and Community Building.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: Results from the top-performing model, mPLUG-OWl, on three examples in our dataset.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: (a) An example of a visual question with a \"visual answer\". (b) An example of a visual question with multiple images. The answer is omitted to save space.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 8 :8Fig. 8: The prompt for LLaMA2 metric.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 :9Fig. 9: Performance of mPLUG-Owl and GPT-4V on the subset of VQAonline, for each of 105 topics with their five super-categories represented in 5 different colors. Results are shown with respect to LLaMA2 metric.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 :10Fig. 10: Performance of mPLUG-Owl and LLaVA on the VQAonline, for each of 105 topics with their five super-categories represented in 5 different colors. Results are shown with respect to three evaluation metrics: (a) ROUGEL, (b) LLaMA2, and (c) CLIP-S. 
Note that we only report LLaMA2 metric on the subset.For visualization simplicity, we show text labels only for topics with identified interesting trends. We omitted \"language\" for each language topic and omitted topics with less than 10 data points. We also shortened some topics' names, such as \"Gardening\" instead of \"Gardening & Landscaping\", and \"Fitness\" instead of \"Physical Fitness\".", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Fig. 11 :11Fig. 11: Instructions for our domain expert annotation task.", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Avisual question needing specific domain knowledge: Category Economics Question Measuring income effect before knowing substitution effect Context Suppose on the x axis we have shelter and on the y axis we have composite goods. Now, if the price of shelter increases, the optimal bundle changes from point A to point D. Standard textbook tells me to draw a line parallel to the new budget line B1, which is tangent to the indifference curve I0. In this way we can get the substitution effect. The income effect follows. My question is, can we reverse the two procedures and measure the income effect first by drawing a line parallel to the original budget line and tangent to the new indifference line? Thanks! Reference Answer", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Avisual question when the question and the context are misleading while the image contains true information: : Category Gardening & Landscaping Question How do I safely eradicate mold without hurting cactus seedlings? Context", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "InstructBLIP (Incorrect) Chess BLIP2 (Incorrect) bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 bxa3 b .", "figure_data": "", "figure_id": "fig_13", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Performance of VLMs on our VQAonline dataset with respect to five metrics both for the zero-shot setting as well as for the top-performing zero-shot model in a one-shot setting. As shown, mPLUG-Owl is the best-performing model in the zero-shot setting and has a small drop in performance regardless of which of four types of in-context exemplars that it receives. 
All metrics range from 0 to 1 with larger values signifying better performance.", "figure_data": "ModelsROUGE-L METEOR BERTscore CLIP-S RefCLIP-SmPLUG-Owl [84]0.160.120.760.720.75LLaVA [51]0.150.080.750.730.75MiniGPT-4 [88]0.150.090.750.700.74InstructBLIP [18]0.090.050.670.680.71BLIP2 [46]0.090.040.690.660.71mPLUG-1shot-Random-noImg0.1380.1060.7500.715 0.747mPLUG-1shot-MatchedTopic-noImg 0.1390.1080.7560.716 0.749mPLUG-1shot-Random0.1350.1040.7450.717 0.748mPLUG-1shot-MatchedTopic0.1370.1050.7520.717 0.748", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Fine-grained analysis of the top-performing VQA model, mPLUG-Owl, when fed different input types (Q+C+I, C+I, Q+C, Q+I) and with respect to each of the five VQA supercategory types covered in VQAonline.", "figure_data": "ModelsROUGE-L METEOR BERTscore CLIP-S RefCLIP-SmPLUG(Q+C+I)0.160.120.760.720.75mPLUG(C+I)0.140.120.760.720.75mPLUG(Q+C)0.140.110.760.710.75mPLUG(Q+I)0.130.070.740.730.73Science0.150.140.780.730.77Life&Arts0.150.130.760.720.74Culture&Recreation0.140.110.750.720.74Business0.140.130.760.720.75Professional0.150.130.760.730.75", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Correlation between human judgment and six evaluation metrics for 200 VQAs.", "figure_data": "LLama2 METEOR BERTscore RefCLIP-S ROUGEL CLIP-SCorrelation0.975 0.9710.9000.8920.8740.662p-value0.001 0.0010.0150.0170.0230.152", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison of visual questions from eight existing VQA datasets and our new VQAonline dataset regarding image and question sources. Like OVEN[34], INFOSEEK sources images from nine image classification and retrieval datasets.", "figure_data": "VQA DatasetWhich Images?Who Asked?From User?Our datasetStackExchange UsersStackExchange Users✓Context-VQA", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "that are dedicated to community question answering.", "figure_data": "Oursfree-form intent Ignatova (2009) [36] Harper (2010) [31] Toba (2014) [73] Cambazoglu(2021) [10] Bolotova (2022) [6] Fu (2016) [23]InstructionHowProceduralPrescriptiveProceureProcessInstructionInstructionEvidence-based Comprehend fact General Info need FactualFactoidDescriptionEvidence-basedFactualReasonWhyCausal-ReasonReasonReason-VerificationValidateVerification-Yes/NoVerificationDebate-Opinion-DisjunctiveDisapproval/Quality OpinionOpinion-OpinionIdentification RecognizeConcept completion Identification-Entity-Identifying resourcesAdviceAdvise-Advice-AdviceExperienceRecommend/Solution-ProveQuantification--Quantity--CompareComparison", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Performance of VLMs on the subset with 1,903 random samples from VQAonline dataset with respect to six (ROUGE-L, METEOR, BERTscore, CLIP-S, RefCLIP-S, and LLaMA2) for the zero-shot setting. 
As shown, GPT-4V is the best-performing model in the zero-shot setting.", "figure_data": "ModelsROUGE-L METEOR BERTscore CLIP-S RefCLIP-S LLaMA2GPT-4V [62]0.160.120.760.700.750.70mPLUG-Owl [84]0.140.100.750.710.740.62LLaVA [51]0.140.080.750.720.740.62MiniGPT-4 [88]0.130.080.740.690.730.59InstructBLIP [18]0.090.060.690.680.710.59BLIP2 [46]0.070.040.690.650.700.53", "figure_id": "tab_7", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Performance of mPLUG-Owl on the subset of VQAonline dataset with respect to the LLaMA2 metric with four different one shot settings.", "figure_data": "ModelsLLaMA2mPLUG-1shot-Random-noImg0.609mPLUG-1shot-MatchedTopic-noImg 0.617mPLUG-1shot-Random0.608mPLUG-1shot-MatchedTopic0.617", "figure_id": "tab_8", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Fine-grained analysis of the top-performing VQA models on the subset of the VQAonline dataset, GPT-4V and mPLUG-Owl, when fed different input types (Q+C+I, C+I, Q+C, Q+I), evaluated with LLaMA2 metric.", "figure_data": "ModelsLLaMA2mPLUG (Q+C+I)0.62mPLUG (C+I)0.61mPLUG (Q+C)0.60mPLUG (Q+I)0.58GPT-4V (Q+C+I)0.70GPT-4V (C+I)0.69GPT-4V (Q+C)0.69GPT-4V (Q+I)0.62", "figure_id": "tab_9", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Fine-grained analysis of the top-performing VQA models on the subset of the VQAonline dataset, GPT-4V and mPLUG-Owl, with respect to each of the five VQA super-category types, evaluated with LLaMA2 metric.", "figure_data": "ModelsLLaMA2mPLUG (Science)0.69mPLUG (Life&Arts)0.59mPLUG (Culture&Recreation)0.60mPLUG (Business)0.72mPLUG (Professional)0.66GPT-4V (Science)0.75GPT-4V (Life&Arts)0.67GPT-4V (Culture&Recreation)0.67GPT-4V (Business)0.76GPT-4V (Professional)0.74", "figure_id": "tab_10", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "A visual question that requires specific domain knowledge in Economics and infographic understanding. Models' answers are from GPT-4V. Errors are marked in red by an economics PhD student for reader convenience.", "figure_data": "", "figure_id": "tab_11", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "A visual question when the question and context are misleading while the image contains true information. Model answers are from GPT-4V and mPLUG-Owl, all misled by the question and context. Models' answers verified by a Ph.D. in Botany major.A visual question when the question and context are misleading while the image contains true information:LLaVA (Partially correct) To safely eradicate mold without hurting the cactus seedlings, you can follow these steps: 1. Identify the mold: Look closely at the mold's appearance and determine if it's harmful to cacti. Some molds can be beneficial, such as those that create the beautiful Cordyceps fungus, while others can be detrimental to the plant's health. 2. Remove the mold: If the mold is detrimental, gently pick it off using a clean, dry cotton swab or a soft brush. Avoid using water or wetting the cotton swab, as this could spread the mold spores and affect the seedlings. 3. Improve air circulation and light: Ensure that your seedlings are receiving adequate light and air circulation. Mold thrives in dark, damp environments, so providing proper light and ventilation can help prevent mold growth. 4. Keep the seedlings well-spaced: Overcrowding can lead to mold, as it allows insufficient air circulation. Keep the seedlings well-spaced and avoid overwatering to maintain a suitable environment for healthy growth. 5. 
Use hydrogen peroxide cautiously: If you must use hydrogen peroxide, dilute it to a low concentration (1-3%), and apply it only to the affected area, avoiding the seedlings. Remember that hydrogen peroxide can be harmful to some plants, so use it with caution and dilute it properly to minimize risk. Always prioritize the health and safety of your seedlings, and avoid using harsh chemicals or treatments that could harm them. Focus on maintaining a clean and well-lit environment to encourage healthy growth and prevent mold.MiniGPT-4 (Incorrect) It's important to take mold in your cactus seedlings seriously and take measures to prevent its spread. While hydrogen peroxide (H2O2) is known for its antimicrobial properties, it can be harmful to plants, especially at high concentrations. I would not recommend using H2O2 to get rid of mold in your seedlings. Instead, you can use the following methods: 1. Water and sunlight: Water the affected area thoroughly and place it in direct sunlight. The sun's UV rays can help to kill the mold spores. 2. Vinegar: Mix a solution of equal parts water and white vinegar in a spray bottle, and spray the mold off the seedlings. Vinegar is a natural fungicide. 3. Bleach: Mix a solution of 1 part bleach and 9 parts water and spray it onto the seedlings. Be careful not to overdose the bleach, as it can be harmful to plants and the environment. Remember, after treating the mold, to thoroughly clean and disinfect the affected area and surrounding areas to prevent the mold from re-appearing. Also keep the humidity and temperature of your grow area in check, to prevent mold from appearing again. Please let me know if you InstructBLIP (Incorrect)", "figure_data": "", "figure_id": "tab_12", "figure_label": "16", "figure_type": "table" }, { "figure_caption": "A visual question when the question and context are misleading while the image contains true information. Model answers are from LLaVA, MiniGPT-4, InstructBLIP, and BLIP2, all misled by question and context. Models' answers verified by a Ph.D. in Botany major.", "figure_data": "", "figure_id": "tab_13", "figure_label": "17", "figure_type": "table" }, { "figure_caption": "A visual question related to the Chinese Language and requires text recognition skills. Models' answers verified by a Chinese native speaker. the minimum number of moves, a legal chess game where all pieces are captured except the sixteen pawns and the two kings. Example of final configuration: Which is done in 21 moves: 1. e4 Nc6 2. Bb5 e5 3. Bxc6 dxc6 4. Nf3 Bg4 5. h4 Bxf3 6. Rh3 Bxd1 7. Kxd1 f5 8. Rg3 Nf6 9. Rg4 fxg4 10. d4 Ba3 11. Bg5 Qd5 12. Bxf6 Qb3 13. bxa3 Rd8 14. Bxd8 Kf7 15. axb3 Rxd8 16. Nc3 Rd5 17. exd5 Kf6 18. Nb5 cxb5 19. Kd2 Kf5 20. Re1 Kf4 21. Re4+ Kxe4 Can you beat 21 moves? Reference Answer 13 moves: 1. e3 e6 2. Na3 Bxa3 3. bxa3 Na6 4. Bxa6 bxa6 5. Rb1 Qf6 6. Rb8 Qa1 7. Rxc8+ Ke7 8. Ne2 Qxc1 9. O-O Qxd1 10. Rxg8 Qxe2 11. Rxh8 Qxf1+ 12. Kxf1 Re8 13. Rxe8+ Kxe8 12 1 2 : 1. e3 e6 2. Na3 Bxa3 3. bxa3 Na6 4. Bxa6 bxa6 5. Qf3 Qf6 6. Qxa8 Qxa1 7. Qxc8+ Ke7 8. Qxg8 Qxc1+ 9. Ke2 Qxg1 10. Qxh8 Qxh1 11. Qe8+ Kxe8 12. a4 Qe1+ 13. Kxe1 Try it online! GPT-4V (Incorrect) Yes, it can be done in 19 moves: 1. e4 Nc6 2. Bc4 e5 3. Bxf7+ Kxf7 4. Qh5+ g6 5. Qxe5 Nxe5 6. d4 Ng4 7. h4 Nh2 8. Rxh2 d5 9. Bg5 Qxg5 10. hxg5 Bf5 11. gxf6 Bc5 12. exf5 Bxd4 13. Rxh7+ Rxh7 14. fxg6+ Kxg6 15. Nf3 Rf8 16. Nxd4 Rxf6 17. Nd2 Rh1+ 18. Nxh1 Rf4 19. Ng3 Kg5 (Note: There may be even more efficient solutions, but this is the best I was able to come up with. 
mPLUG-Owl (Incorrect) Yes, it is possible to beat the given game of chess with 19 moves. The game is in a drawish position, and it's possible to force a win by sacrificing a piece and exploiting some of the tactics present in the game. Here are the moves to beat the game: 1. e4 Nc6 2. Bb5 e5 3. Bxc6 dxc6 4. Nf3 Bg4 5. h4 Bxf3 6. Rh3 Bxd1 7. Kxd1 f5 8. Rh4 fxg4 9. d4 Ba3 10. Bg5 Qd5 11. Bxf6 Qb3 12. bxa3 Rd8 13. Bxd8 Kf7 14. axb3 Rxd8 15. Nc3 Rd5 16. exd5 Kf6 17. Nb5 cxb5 18. Kd2 Kf5 19. Re1 Kf4 (19. Re4+ Kxe4 would lead to a stalemate as white can capture the king with checkmate on the next turn. The move 19. Re1 Kf4 is therefore a better option to avoid stalemate).", "figure_data": "CategoryChessQuestionDevouring NobilityContextProvide, with", "figure_id": "tab_14", "figure_label": "18", "figure_type": "table" }, { "figure_caption": "1. e4 Nc6 2. Bb5 e5 3. Bxc6 dxc6 4. Nf3 Bg4 5. h4 Bxf3 6. Rh3 Bxd1 7. Kxd1 f5 8. Rg3 Nf6 9. Rg4 fxg4 10. d4 Ba3 11. Bg5 Qd5 12. Bxf6 Qb3 13. bxa3 Rd8 14. Bxd8 Kf7 15. axb3 Rxd8 16. Nc3 Rd5 17. exd5 Kf6 18. Nb5 cxb5 19. Kd2 Kf5 20. Re1 Kf4 21. Re4+ Kxe4", "figure_data": "", "figure_id": "tab_15", "figure_label": "", "figure_type": "table" } ]
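The automatic scores reported in the tables above (ROUGE-L, METEOR, BERTscore) can be reproduced in spirit with off-the-shelf metric implementations. The sketch below is only an illustrative assumption, not the benchmark's official evaluation script: it uses the Hugging Face evaluate library with made-up prediction/reference strings, and it omits CLIP-S/RefCLIP-S and the LLaMA2-based judge, which require an image encoder and an LLM pipeline not shown in this excerpt.

```python
# Illustrative sketch only: scoring a model answer against a reference answer with
# generic text metrics (the exact evaluation scripts behind the tables are not shown here).
import evaluate

predictions = ["Dilute hydrogen peroxide can be misted lightly on the soil surface."]  # made-up example
references = ["A weak hydrogen peroxide solution is commonly used on seedling soil."]  # made-up example

rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")
bertscore = evaluate.load("bertscore")

rouge_l = rouge.compute(predictions=predictions, references=references)["rougeL"]
meteor_score = meteor.compute(predictions=predictions, references=references)["meteor"]
bert_f1 = bertscore.compute(predictions=predictions, references=references, lang="en")["f1"][0]

print(f"ROUGE-L: {rouge_l:.2f}  METEOR: {meteor_score:.2f}  BERTScore-F1: {bert_f1:.2f}")
```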
Chongyan Chen; Mengchen Liu; Noel Codella; Yunsheng Li; Lu Yuan; Danna Gurari
[ { "authors": "S Aggarwal; D Mandowara; V Agrawal; D Khandelwal; P Singla; D Garg", "journal": "", "ref_id": "b0", "title": "Explanations for commonsenseqa: New dataset and models", "year": "2021" }, { "authors": "R Anantha; S Vakulenko; Z Tu; S Longpre; S Pulman; S Chappidi", "journal": "", "ref_id": "b1", "title": "Open-domain question answering goes conversational via question rewriting", "year": "2020" }, { "authors": "S Antol; A Agrawal; J Lu; M Mitchell; D Batra; C Lawrence Zitnick; D Parikh", "journal": "", "ref_id": "b2", "title": "Vqa: Visual question answering", "year": "2015" }, { "authors": "A F Biten; R Litman; Y Xie; S Appalaraju; R Manmatha", "journal": "", "ref_id": "b3", "title": "Latr: Layout-aware transformer for scene-text vqa", "year": "2022" }, { "authors": "N Bitton-Guetta; Y Bitton; J Hessel; L Schmidt; Y Elovici; G Stanovsky; R Schwartz", "journal": "", "ref_id": "b4", "title": "Breaking common sense: Whoops! a vision-and-language benchmark of synthetic and compositional images", "year": "2023" }, { "authors": "V Bolotova; V Blinov; F Scholer; W B Croft; M Sanderson", "journal": "", "ref_id": "b5", "title": "Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval", "year": "2022" }, { "authors": "E Brady; M R Morris; Y Zhong; S White; J P Bigham", "journal": "", "ref_id": "b6", "title": "Visual challenges in the everyday lives of blind people", "year": "2013" }, { "authors": "M Byeon; B Park; H Kim; S Lee; W Baek; S Kim", "journal": "", "ref_id": "b7", "title": "Coyo-700m: Image-text pair dataset", "year": "2022" }, { "authors": "B Byrne; K Krishnamoorthi; C Sankar; A Neelakantan; D Duckworth; S Yavuz; B Goodrich; A Dubey; A Cedilnik; K Y Kim", "journal": "", "ref_id": "b8", "title": "Taskmaster-1: Toward a realistic and diverse dialog dataset", "year": "2019" }, { "authors": "B B Cambazoglu; L Tavakoli; F Scholer; M Sanderson; B Croft", "journal": "", "ref_id": "b9", "title": "An intent taxonomy for questions asked in web search", "year": "2021" }, { "authors": "O M Camburu; B Shillingford; P Minervini; T Lukasiewicz; P Blunsom", "journal": "", "ref_id": "b10", "title": "Make up your mind! 
adversarial generation of inconsistent natural language explanations", "year": "2019" }, { "authors": "S Changpinyo; P Sharma; N Ding; R Soricut", "journal": "", "ref_id": "b11", "title": "Conceptual 12m: Pushing web-scale imagetext pre-training to recognize long-tail visual concepts", "year": "2021" }, { "authors": "L Chen; D Zhang; L Mark", "journal": "", "ref_id": "b12", "title": "Understanding user intent in community question answering", "year": "2012" }, { "authors": "X Chen; H Fang; T Y Lin; R Vedantam; S Gupta; P Dollár; C L Zitnick", "journal": "", "ref_id": "b13", "title": "Microsoft coco captions: Data collection and evaluation server", "year": "2015" }, { "authors": "Y Chen; H Hu; Y Luan; H Sun; S Changpinyo; A Ritter; M W Chang", "journal": "", "ref_id": "b14", "title": "Can pre-trained vision and language models answer visual information-seeking questions?", "year": "2023" }, { "authors": "H W Chung; L Hou; S Longpre; B Zoph; Y Tay; W Fedus; Y Li; X Wang; M Dehghani; S Brahma", "journal": "", "ref_id": "b15", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "K Cobbe; V Kosaraju; M Bavarian; M Chen; H Jun; L Kaiser; M Plappert; J Tworek; J Hilton; R Nakano", "journal": "", "ref_id": "b16", "title": "Training verifiers to solve math word problems", "year": "2021" }, { "authors": "W Dai; J Li; D Li; A M H Tiong; J Zhao; W Wang; B Li; P Fung; S Hoi", "journal": "", "ref_id": "b17", "title": "Instructblip: Towards general-purpose vision-language models with instruction tuning", "year": "2023" }, { "authors": "Z Dai; A T Chaganty; V Y Zhao; A Amini; Q M Rashid; M Green; K Guu", "journal": "PMLR", "ref_id": "b18", "title": "Dialog inpainting: Turning documents into dialogs", "year": "2022" }, { "authors": "D Elliott; F Keller", "journal": "", "ref_id": "b19", "title": "Image description using visual dependency representations", "year": "2013" }, { "authors": "S Exchange", "journal": "", "ref_id": "b20", "title": "Gpt-4v pricing", "year": "2023" }, { "authors": "Y Fang; W Wang; B Xie; Q Sun; L Wu; X Wang; T Huang; X Wang; Y Cao", "journal": "", "ref_id": "b21", "title": "Eva: Exploring the limits of masked visual representation learning at scale", "year": "2023" }, { "authors": "H Fu; Y Fan", "journal": "", "ref_id": "b22", "title": "Music information seeking via social q&a: An analysis of questions in music stackexchange community", "year": "2016" }, { "authors": "L Gao; S Biderman; S Black; L Golding; T Hoppe; C Foster; J Phang; H He; A Thite; N Nabeshima", "journal": "", "ref_id": "b23", "title": "The pile: An 800gb dataset of diverse text for language modeling", "year": "2020" }, { "authors": "M Geva; D Khashabi; E Segal; T Khot; D Roth; J Berant", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b24", "title": "Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies", "year": "2021" }, { "authors": "O González-Chávez; G Ruiz; D Moctezuma; T Ramirez-Delreal", "journal": "Signal Processing: Image Communication", "ref_id": "b25", "title": "Are metrics measuring what they should? 
an evaluation of image captioning task metrics", "year": "2024" }, { "authors": "Y Goyal; T Khot; D Summers-Stay; D Batra; D Parikh", "journal": "", "ref_id": "b26", "title": "Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering", "year": "2017" }, { "authors": "D Gurari; K Grauman", "journal": "", "ref_id": "b27", "title": "Crowdverge: Predicting if people will agree on the answer to a visual question", "year": "2017" }, { "authors": "D Gurari; Q Li; A J Stangl; A Guo; C Lin; K Grauman; J Luo; J P Bigham", "journal": "", "ref_id": "b28", "title": "Vizwiz grand challenge: Answering visual questions from blind people", "year": "2018" }, { "authors": "D Gurari; Y Zhao; M Zhang; N Bhattacharya", "journal": "", "ref_id": "b29", "title": "Captioning images taken by people who are blind", "year": "2020" }, { "authors": "F M Harper; J Weinberg; J Logie; J A Konstan", "journal": "First Monday", "ref_id": "b30", "title": "Question types in social q&a sites", "year": "2010" }, { "authors": "B He; M Xia; X Yu; P Jian; H Meng; Z Chen", "journal": "IEEE", "ref_id": "b31", "title": "An educational robot system of visual question answering for preschoolers", "year": "2017" }, { "authors": "J Hessel; A Holtzman; M Forbes; R L Bras; Y Choi", "journal": "", "ref_id": "b32", "title": "CLIPScore: a reference-free evaluation metric for image captioning", "year": "2021" }, { "authors": "H Hu; Y Luan; Y Chen; U Khandelwal; M Joshi; K Lee; K Toutanova; M W Chang", "journal": "", "ref_id": "b33", "title": "Open-domain visual entity recognition: Towards recognizing millions of wikipedia entities", "year": "2023" }, { "authors": "D A Hudson; C D Manning", "journal": "", "ref_id": "b34", "title": "Gqa: A new dataset for real-world visual reasoning and compositional question answering", "year": "2019" }, { "authors": "K Ignatova; C Toprak; D Bernhard; I Gurevych", "journal": "Tagungsband des GSCL Symposiums 'Sprachtechnologie und eHumanities", "ref_id": "b35", "title": "Annotating question types in social q&a sites", "year": "2009" }, { "authors": "L Jing; R Li; Y Chen; M Jia; X Du", "journal": "", "ref_id": "b36", "title": "Faithscore: Evaluating hallucinations in large vision-language models", "year": "2023" }, { "authors": "J Johnson; B Hariharan; L Van Der Maaten; L Fei-Fei; C Lawrence Zitnick; R Girshick", "journal": "", "ref_id": "b37", "title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning", "year": "2017" }, { "authors": "T Khot; P Clark; M Guerquin; P Jansen; A Sabharwal", "journal": "", "ref_id": "b38", "title": "Qasc: A dataset for question answering via sentence composition", "year": "2020" }, { "authors": "C Kofler; M Larson; A Hanjalic", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b39", "title": "User intent in multimedia search: a survey of the state of the art and future challenges", "year": "2016" }, { "authors": "R Krishna; Y Zhu; O Groth; J Johnson; K Hata; J Kravitz; S Chen; Y Kalantidis; L J Li; D A Shamma", "journal": "International journal of computer vision", "ref_id": "b40", "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", "year": "2017" }, { "authors": "M Lamm; J Palomaki; C Alberti; D Andor; E Choi; L B Soares; M Collins", "journal": "Transactions of the Association for computational Linguistics", "ref_id": "b41", "title": "Qed: A framework and dataset for explanations in question answering", "year": "2021" }, { "authors": "J Lei; L Yu; M 
Bansal; T Berg", "journal": "", "ref_id": "b42", "title": "Tvqa: Localized, compositional video question answering", "year": "2018" }, { "authors": "A Lewkowycz; A Andreassen; D Dohan; E Dyer; H Michalewski; V Ramasesh; A Slone; C Anil; I Schlag; T Gutman-Solo", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b43", "title": "Solving quantitative reasoning problems with language models", "year": "2022" }, { "authors": "B Li; Y Zhang; L Chen; J Wang; J Yang; Z Liu", "journal": "", "ref_id": "b44", "title": "Otter: A multi-modal model with in-context instruction tuning", "year": "2023" }, { "authors": "J Li; D Li; S Savarese; S Hoi", "journal": "", "ref_id": "b45", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023-09-19" }, { "authors": "Y Li; D Choi; J Chung; N Kushman; J Schrittwieser; R Leblond; T Eccles; J Keeling; F Gimeno; A Dal Lago", "journal": "Science", "ref_id": "b46", "title": "Competition-level code generation with alphacode", "year": "2022" }, { "authors": "C Y Lin", "journal": "Text summarization branches out", "ref_id": "b47", "title": "Rouge: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "W Ling; D Yogatama; C Dyer; P Blunsom", "journal": "", "ref_id": "b48", "title": "Program induction by rationale generation: Learning to solve and explain algebraic word problems", "year": "2017" }, { "authors": "F Liu; K Lin; L Li; J Wang; Y Yacoob; L Wang", "journal": "", "ref_id": "b49", "title": "Mitigating hallucination in large multi-modal models via robust instruction tuning", "year": "2023" }, { "authors": "H Liu; C Li; Q Wu; Y J Lee", "journal": "", "ref_id": "b50", "title": "Visual instruction tuning", "year": "2023-09-19" }, { "authors": "Z Liu", "journal": "", "ref_id": "b51", "title": "Understanding and modeling user behavior in social question and answering", "year": "2015" }, { "authors": "P Lu; H Bansal; T Xia; J Liu; C Li; H Hajishirzi; H Cheng; K W Chang; M Galley; J Gao", "journal": "", "ref_id": "b52", "title": "Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts", "year": "2023" }, { "authors": "K Marino; M Rastegari; A Farhadi; R Mottaghi", "journal": "", "ref_id": "b53", "title": "Ok-vqa: A visual question answering benchmark requiring external knowledge", "year": "2019-06" }, { "authors": "M Mathew; V Bagal; R Tito; D Karatzas; E Valveny; C Jawahar", "journal": "", "ref_id": "b54", "title": "Infographicvqa", "year": "2022-01" }, { "authors": "M Mathew; D Karatzas; C Jawahar", "journal": "", "ref_id": "b55", "title": "Docvqa: A dataset for vqa on document images", "year": "2021" }, { "authors": "A Mishra; S Shekhar; A K Singh; A Chakraborty", "journal": "IEEE", "ref_id": "b56", "title": "Ocr-vqa: Visual question answering by reading text in images", "year": "2019" }, { "authors": "N Naik; C Potts; E Kreiss", "journal": "", "ref_id": "b57", "title": "Context-vqa: Towards context-aware and purposeful visual question answering", "year": "2023" }, { "authors": "L Nie; M Wang; Z Zha; G Li; T S Chua", "journal": "", "ref_id": "b58", "title": "Multimedia answering: enriching text qa with media information", "year": "2011" }, { "authors": "J Novikova; O Dušek; A C Curry; V Rieser", "journal": "", "ref_id": "b59", "title": "Why we need new evaluation metrics for nlg", "year": "2017" }, { "authors": "Y Onoe; M J Zhang; E Choi; G Durrett", "journal": "", "ref_id": "b60", "title": "Creak: A dataset for 
commonsense reasoning over entity knowledge", "year": "2021" }, { "authors": "", "journal": "OpenAI", "ref_id": "b61", "title": "Gpt-4 technical report", "year": "2023" }, { "authors": "V Ordonez; G Kulkarni; T Berg", "journal": "Advances in neural information processing systems", "ref_id": "b62", "title": "Im2text: Describing images using 1 million captioned photographs", "year": "2011" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "PMLR", "ref_id": "b63", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "C Raffel; N Shazeer; A Roberts; K Lee; S Narang; M Matena; Y Zhou; W Li; P J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b64", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "T Saikh; T Ghosal; A Mittal; A Ekbal; P Bhattacharyya", "journal": "International Journal on Digital Libraries", "ref_id": "b65", "title": "Scienceqa: a novel resource for question answering on scholarly articles", "year": "2022" }, { "authors": "V Sanh; A Webson; C Raffel; S H Bach; L Sutawika; Z Alyafeai; A Chaffin; A Stiegler; T L Scao; A Raja", "journal": "", "ref_id": "b66", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2021" }, { "authors": "C Schuhmann; R Beaumont; R Vencu; C Gordon; R Wightman; M Cherti; T Coombes; A Katta; C Mullis; M Wortsman", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b67", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "C Schuhmann; R Vencu; R Beaumont; R Kaczmarczyk; C Mullis; A Katta; T Coombes; J Jitsev; A Komatsuzaki", "journal": "", "ref_id": "b68", "title": "Laion-400m: Open dataset of clip-filtered 400 million imagetext pairs", "year": "2021" }, { "authors": "P Sharma; N Ding; S Goodman; R Soricut", "journal": "", "ref_id": "b69", "title": "Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning", "year": "2018" }, { "authors": "A Singh; V Natarajan; M Shah; Y Jiang; X Chen; D Batra; D Parikh; M Rohrbach", "journal": "", "ref_id": "b70", "title": "Towards vqa models that can read", "year": "2019" }, { "authors": "I Srba; M Bielikova", "journal": "ACM Transactions on the Web (TWEB)", "ref_id": "b71", "title": "A comprehensive survey and classification of approaches for community question answering", "year": "2016" }, { "authors": "H Toba; Z Y Ming; M Adriani; T S Chua", "journal": "Information Sciences", "ref_id": "b72", "title": "Discovering high quality answers in community question answering archives using a hierarchy of classifiers", "year": "2014" }, { "authors": "H Touvron; T Lavril; G Izacard; X Martinet; M A Lachaux; T Lacroix; B Rozière; N Goyal; E Hambro; F Azhar", "journal": "", "ref_id": "b73", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "H Touvron; L Martin; K Stone; P Albert; A Almahairi; Y Babaei; N Bashlykov; S Batra; P Bhargava; S Bhosale", "journal": "", "ref_id": "b74", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "C Wang; S Liang; Y Zhang; X Li; T Gao", "journal": "", "ref_id": "b75", "title": "Does it make sense? and why? 
a pilot study for sense making and explanation", "year": "2019" }, { "authors": "Y Wang; S Mishra; P Alipoormolabashi; Y Kordi; A Mirzaei; A Arunkumar; A Ashok; A S Dhanasekaran; A Naik; D Stap", "journal": "", "ref_id": "b76", "title": "Benchmarking generalization via in-context instructions on 1,600+ language tasks", "year": "2022" }, { "authors": "J Wei; M Bosma; V Y Zhao; K Guu; A W Yu; B Lester; N Du; A M Dai; Q V Le", "journal": "", "ref_id": "b77", "title": "Finetuned language models are zero-shot learners", "year": "2021" }, { "authors": "Q Wu; D Teney; P Wang; C Shen; A Dick; A Van Den Hengel", "journal": "Computer Vision and Image Understanding", "ref_id": "b78", "title": "Visual question answering: A survey of methods and datasets", "year": "2017" }, { "authors": "F Xu; Y Song; M Iyyer; E Choi", "journal": "", "ref_id": "b79", "title": "A critical evaluation of evaluations for long-form question answering", "year": "2023" }, { "authors": "P Xu; W Shao; K Zhang; P Gao; S Liu; M Lei; F Meng; S Huang; Y Qiao; P Luo", "journal": "", "ref_id": "b80", "title": "Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models", "year": "2023" }, { "authors": "Z Yang; L Li; K Lin; J Wang; C C Lin; Z Liu; L Wang", "journal": "", "ref_id": "b81", "title": "The dawn of lmms: Preliminary explorations with gpt-4v (ision)", "year": "2023" }, { "authors": "M Yasunaga; P Liang", "journal": "PMLR", "ref_id": "b82", "title": "Graph-based, self-supervised program repair from diagnostic feedback", "year": "2020" }, { "authors": "Q Ye; H Xu; G Xu; J Ye; M Yan; Y Zhou; J Wang; A Hu; P Shi; Y Shi; C Jiang; C Li; Y Xu; H Chen; J Tian; Q Qi; J Zhang; F Huang", "journal": "", "ref_id": "b83", "title": "mplug-owl: Modularization empowers large language models with multimodality", "year": "2023" }, { "authors": "X Zeng; Y Wang; T Y Chiu; N Bhattacharya; D Gurari", "journal": "", "ref_id": "b84", "title": "Vision skills needed to answer visual questions", "year": "2020" }, { "authors": "T Zhang; V Kishore; F Wu; K Q Weinberger; Y Artzi", "journal": "", "ref_id": "b85", "title": "Bertscore: Evaluating text generation with bert", "year": "2019" }, { "authors": "L Zheng; W L Chiang; Y Sheng; S Zhuang; Z Wu; Y Zhuang; Z Lin; Z Li; D Li; E Xing", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b86", "title": "Judging llm-as-a-judge with mt-bench and chatbot arena", "year": "2024" }, { "authors": "D Zhu; J Chen; X Shen; X Li; M Elhoseiny", "journal": "", "ref_id": "b87", "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models", "year": "2023-09-19" } ]
[ { "formula_coordinates": [ 5, 120.77, 102.06, 102.26, 8.06 ], "formula_id": "formula_0", "formula_text": "Q A Nimg # Auth." }, { "formula_coordinates": [ 18, 35.78, 490.18, 333.47, 88.64 ], "formula_id": "formula_1", "formula_text": "- - - Comparison - - Definition - Definition - - - Find - - - Resource - - Language - - - Language - - - - - - Temporal - - - - - - Calculation - - - - - Attribute - - - - - - List - - - - - - Weather - - - - f - Location - Explain - - - - - - - - - - - - Request research Other - - - - - -" } ]
10.5244/C.31.135
2023-11-27
[ { "figure_ref": [ "fig_0" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b6", "b2", "b5", "b57", "b34", "b35", "b38", "b66", "b7", "b15", "b48", "b67", "b0", "b20", "b65", "b70", "b45", "b26", "b61", "b23", "b50", "b52", "b56", "b11", "b1", "b32", "b70" ], "table_ref": [], "text": "P ERSON re-identification (ReID) aims to search the target person among the gallery set across multiple cameras. It can benefit many crucial tasks such as multi-object tracking [7], crowd counting [3] and action analysis [6]. With the development of deep learning and efforts of researchers, a large number of ReID models have been proposed in recent years [58], including representation learning [35], [36], [39], [67], metric learning [8], [16], [49], [68], and ranking optimization [1], [21], [66], [71].\nMost existing ReID models focus on the visible-visible matching problem. However, they are not workable in low illumination. To meet the need for 24-hour surveillance systems, visible-infrared ReID (VI-ReID) has recently received substantial attention. Wu et al. [46] first proposed the VI-ReID problem, and constructed the dataset SYSU-MM01. RegDB [27] was also a commonly used and relatively small dataset with RGB-IR sample pairs collected by dual camera system. Yunhao Du, Cheng Lei, Zhicheng Zhao, Yuan Dong and Fei Su are with Beijing Key Laboratory of Network System and Network Culture, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China. (e-mail:{dyh bupt, mr.leicheng, zhaozc, yuandong, sufei}@bupt.edu.cn) (Corresponding author: Zhicheng Zhao.) Fig. 1. The radar chart of comparisons between our constructed BUPTCampus and other common datasets. Six aspects are considered, i.e., the number of identities, the number of images, whether to include cross-modality pairs, data type (image or video), data modality(RGB or RGB+IR), whether to include auxiliary samples. Detailed quantitative comparisons are listed in Tab.I.\nRecently, Zhang et al. [62] collected a new challenging lowlight VI-ReID dataset LLCM with significant light changes. Different from these image-based datasets, Lin et al. [24] contributed a video-based VI-ReID dataset HITSZ-VCM to help exploit the temporal information. However, all these datasets are limited by the data scale, style, or lack of paired samples.\nIn this paper, we concentrate on the video-based VI-ReID problem. We first collect a new dataset, dubbed BUPTCampus, which distinguishes itself from previous datasets in the following aspects:\n1) Instead of images, it collects tracklets as samples, enabling to exploit the temporal cues.\n2) It contains pixel-aligned RGB/IR sample pairs captured by binocular cameras, which can facilitate modalityinvariant learning.\n3) It is much larger than existing VI-ReID datasets with 3,080 identities, 16,826 tracklets and 1,869,366 images. Different styles of cameras are used to ensure the diversity of samples (see Tab.II for details).\nMoreover, existing ReID tasks focus on the cross-camera matching problem, and those identities who appear only once are commonly ignored in training. However, these samples are easy to obtain in reality, and would help the learning. We take vast single-camera samples into consideration, and call them auxiliary samples, and accordingly, the main training samples, i.e., multiple-camera samples, are called primary samples. The comparision between BUPTCampus and several common ReID datasets are shown in Fig. 1 (Please refer to Tab.I for details). 
The main difficult cases in BUPTCampus are visualized in Fig. 2, including variations of modality/resolution/camera/pose/illumination/view, occlusion, misalignment, and detection noise.\nBased on the collected dataset, we construct a two-stream framework as baseline following previous works [51]- [53], [57]. To alleviate the differences between visible and infrared modalities, a GAN [12] module (named PairGAN) is applied to generate fake IR samples from real RGB samples. Furthermore, different from previous methods which ignore single-camera samples, we propose to train primary samples and auxiliary ones jointly, and design a dynamic factor to balance the weights of these two sets in the spirit of curriculum learning [2]. Moreover, the commonly used re-ranking methods [33], [71] directly process the coarse instance-level features. Instead, we propose the temporal k-reciprocal reranking algorithm, which takes fine-grained temporal correlations into consideration with the cross-temporal operation. The overall method, called AuxNet, improves the performance of baseline by approximately 10% Rank1 and 10% mAP. We further reproduce 9 state-of-the-art image-based and videobased VI-ReID methods on BUPTCampus, and our methods show substantial superiority to all these solutions.\nThe contributions of our work are summarized as follows:\n• We construct a large-scale video-based VI-ReID dataset with cross-modality sample pairs and auxiliary samples.\nTo the best of our knowledge, this is the first work of collecting paired RGB-IR tracklets, and additionally, it is also the first work to assist cross-camera matching with single-camera samples. • We present a baseline for the video-based VI-ReID task, and introduce GAN to alleviate the modality variations. • We design a simple but effective two-branch framework to train primary and auxiliary samples jointly. Meanwhile, curriculum learning is introduced to learn a dynamic factor to balance these two branches. • A novel temporal k-reciprocal re-ranking algorithm is proposed, which exploits fine-grained temporal cues with the cross-temporal operation. • 9 state-of-the-art VI-ReID methods are reproduced on BUPTCampus, and our method outperforms them by a large margin." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Visible-Infrared Person Re-Identification", "publication_ref": [ "b24", "b51", "b52", "b56", "b58", "b3", "b10", "b53", "b4", "b12", "b41", "b44", "b50", "b40", "b11", "b42", "b18", "b69", "b60", "b16", "b30", "b23" ], "table_ref": [], "text": "Generally speaking, there are two main categories of methods in VI-ReID: shared feature learning methods and feature compensation learning.\nThe shared feature learning methods aim to embed the features from different modalities into the same feature space, where modality-specific features are abandoned and only modality-shared features are reserved. Partially shared twostream networks [25], [52], [53], [57], [59] were commonly used to learn modality-shared feature space, in which the network parameters of shallow layers are specific, and the deep layers are shared among different modalities. Chen et al. [4] studied the neural feature serach paradigm to select features automatically. Fu et al. [11] designed a nerual architecture search method to find the optimal separation scheme for each Batch Normalization(BN) layer. To handle both crossmodality and intra-modality variations simultaneously, Ye et al. 
[54] proposed the bi-directional dual-constrained topranking loss to learn discriminative embeddings. Some other works exploited the modality adversarial learning to mitigate the modality differences [5], [13], [42], [45], [51], and they typically adopted a modality classifier to identify the modality of output features.\nThe feature compensation learning methods try to make up the missing modality-specific cues from one modality to another. Wang et al. [41] emploied Generative Adversarial Network(GAN) [12] to jointly perform alignment in pixellevel and feature-level. To realize the discrepancy reduction in image-level, Wang et al. [43] exploited two variational autoencoders (VAEs) [19] to generate the multi-spectral images. Zhong et al. [70] adpoted the intermediate grayscale images as auxiliary information to transfer infrared images to visible images. The modality synergy module and modality complement module [61] were designed to synergize and complement the diverse semantics of the two modalities. Huang et al. [17] unveiled the modality bias training problem and applied the GCGAN [31] to generate the third modality, which balances the information between RGB and IR.\nRecently, Lin et al. [24] proposed the video-based VI-ReID task, and designed an unified framework to learn modalinvariant and motion-invariant subspace. In this paper, we present a new video-based VI-ReID framework, in which GAN is applied to perform compensation learning, and the twostream framework is used to learn modality-shared features." }, { "figure_ref": [], "heading": "B. Auxiliary Learning", "publication_ref": [ "b37", "b59", "b27", "b9", "b50", "b22", "b72", "b14" ], "table_ref": [], "text": "Auxiliary learning aims to find or design auxiliary tasks which can improve the performance on the primary task. It only pursues high performance of the primary task and the role of the auxiliary task is an assistant. Toshniwal et al. proposed to use lower-level tasks as auxiliary tasks to improve the performance of speech recognition [38]. Zhai et al. presented the S 4 L-Rotation strategy, which assisted semisupervised image classification with a rotation prediction task [60]. For fine-grained classification, Niu et al. exploited the knowledge from auxiliary categories with well-labeled data [28]. In this paper, we regard the auxiliary task as a degraded and simplified version of the primary task, and gradually reduce the amplitude of its gradients in training.\nIn the re-identification task, auxiliary learning was often implemented as an extra classification task. Considering that ReID suffers from viewpoint changes, Feng et al. applied a view classifier to learn view-related information [10]. For cross-modality ReID, Ye et al. introduced a modality classifier to discriminate the features from two different modalities [51]. Li et al. added a self-supervised learning branch for image rotation classification to help discover geometric features [23]. In occluded ReID task, the occluded/non-occluded binary classification (OBC) loss was proposed to determine whether a sample was from an occluded person distribution or not [73]. Instead of using classifiers, He et al. incorporated nonvisual clues through learnable embeddings to alleviate the data bias brought by cameras or viewpoints in TransReID [15]. Different from them, in this work, we take the largescale single-camera samples as the auxiliary set, which is easy to collect with low labeling cost, but is generally ignored in previous fully-supervised ReID works. 
We will show that minor modifications to common learning frameworks can bring remarkable improvements if the auxiliary set is used well. " }, { "figure_ref": [], "heading": "C. Rank Optimization", "publication_ref": [ "b19", "b54", "b70", "b32", "b31" ], "table_ref": [], "text": "Rank optimization typically acts as the post-processing procedure in the ReID task. Given one or more ranking lists, it revises the ranking order to improve the retrieval performance. Leng et al. [20] proposed the bi-directional ranking algorithm, which first performed forward ranking and backward ranking, and then computed the final ranking list in accordance with both content and context similarities. Ye et al. [55] used both similarity and dissimilarity cues to optimize the ranking list. K-reciprocal [71] was one of the most common used reranking methods in recent years, which adopted the Jaccard distances of the k-reciprocal sample sets to complement the initial feature distances. Sarfraz et al. [33] proposed the expanded cross neighborhood distances between sample pairs to exploit neighbor cues. Yu et al. [32] designed a \"divide and fuse\" strategy, which divided the features into multiple parts firstly, and then encoded the neighborhood relation in the subspaces. Finally, these sparse feature vectors were fused with a fuzzy aggregation operator, which exploited the diversity from different parts of high-dimensional features. However, these methods are not designed for the video-based ReID task, and the temporal information is not explicitly explored. In this work, we improve the k-reciprocal re-ranking algorithm by exploiting fine-grained temporal correlations, and prove its effectiveness for video-based settings." }, { "figure_ref": [], "heading": "III. PROPOSED DATASET", "publication_ref": [ "b33", "b21", "b43", "b47", "b63", "b64", "b68", "b26", "b45", "b61", "b23", "b64" ], "table_ref": [], "text": "The BUPTCampus dataset is constructed for video-based visible-infrared person re-identification. We adopt six binocular bimodal cameras to capture RGB and IR modalities simultaneously with approximate pixel-alignment. To ensure the diversity of samples, different cameras and engines are used to capture videos with various color styles, resolutions, frame rates and binocular synchronization modes, as shown in Tab.II. The topologies of cameras are shown in Fig. 4. For labeling, all RGB videos are processed by the multiobject tracking algorithm SiamMOT [34] to predict tracklets, which are further revised manually. Then cross-camera IDs are annotated manually. Benefiting from the synchronization mode of bimodal cameras, the bounding boxes and IDs of IR samples are automatically generated with RGB labels.\nThe resulting dataset contains 3,080 identities, 1,869,066 images and 16,826 trajectories (111 images per trajectory on average). Each identity appears in 1 to 6 cameras, and the proportion distribution is shown in Fig. 3. The multiple-camera samples are randomly split into the training set (primary set, 1,074 IDs) and testing set (1,076 IDs). Note that there are 29.59% identities only appear in one camera. These samples are at fingertips in reality, but are generally ignored in existing works. Instead, we regard them as the auxiliary set (930 IDs) to assist optimization procedure.\nWe compare BUPTCampus with common ReID datasets in Tab.I. 
Most previous image-based and video-based datasets only contain the visible modality [22], [44], [48], [64], [65], [69], which limits the applications in 24-hour surveillance systems. RegDB [27], SYSU-MM01 [46] and LLCM [62] contain both RGB and IR modalities, but are flawed in data type (image) and scales (400 to 1,000 IDs). HITSZ-VCM [24] is a recently proposed dataset which supports the video-based setting. Compared with it, BUPTCampus has the following advantages: 1) much larger scale; 2) containing auxiliary samples; 3) approximate pixel-alignment between RGB/IR tracklets. In conclusion, we summarize BUPTCampus as the first video-based VI-ReID dataset with paired samples and an auxiliary set.
For quantitative evaluation, we adopt the typical Cumulative Matching Characteristic curve (CMC) and mean Average Precision (mAP) as evaluation metrics [65]. CMC represents the expectation of the true match being found within the first n ranks, while it only considers the first match and cannot completely reflect the full retrieval results. mAP takes both precision and recall into account, and it is calculated as the area under the Precision-Recall curve for each query. Euclidean distance is used to calculate the matching cost of all query and gallery pairs to compute the evaluation metrics. Both \"visible to infrared\" and \"infrared to visible\" retrieval modes are utilized to achieve a more comprehensive evaluation." }, { "figure_ref": [ "fig_2" ], "heading": "IV. PROPOSED METHOD", "publication_ref": [], "table_ref": [], "text": "The overview of our proposed method is shown in Fig. 5. It is built on a two-stream baseline network (§IV-A), and adopts a GAN module to help modality-invariant learning (§IV-B). Then an auxiliary learning framework (§IV-C) is designed to train primary and auxiliary samples jointly in the spirit of curriculum learning. Finally, we propose the temporal k-reciprocal re-ranking algorithm (§IV-D) to introduce fine-grained temporal cues for ranking optimization." }, { "figure_ref": [], "heading": "A. Baseline", "publication_ref": [ "b56", "b13" ], "table_ref": [], "text": "The inputs of the network are RGB/IR tracklet pairs with a fixed length. The partially shared two-stream framework [57] with ResNet [14] is utilized as our baseline. Specifically, the first convolutional blocks in the two streams don't share weights, so as to capture modality-specific low-level features. Differently, the parameters of the deeper convolutional blocks are shared to learn modality-invariant high-level embeddings. To aggregate frame-level features from multiple input images, a temporal average pooling layer is adopted to obtain the final embeddings. In training, the softmax cross-entropy loss is calculated to learn modality-shared identity embeddings, which is denoted by:
L^{1}_{ce} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{\exp(z_{i,i})}{\sum_{k=1}^{C}\exp(z_{i,k})}, \quad (1)
where z_{i,k} represents the output classification logit that an input sample x_i is recognized as identity k, N is the number of tracklet samples, and C is the number of identities. Meanwhile, the triplet loss with hard mining is also adopted. Mathematically, it is represented by
L^{1}_{triplet} = \sum_{i=1}^{N}\Big[m + \max_{\forall y_i=y_j} D(x_i, x_j) - \min_{\forall y_i\neq y_k} D(x_i, x_k)\Big]_{+}, \quad (2)
where m is the margin and D(\cdot,\cdot) denotes the feature distance. The total learning objective of our baseline is
L^{1} = L^{1}_{ce} + L^{1}_{triplet}. \quad (3)" }, { "figure_ref": [], "heading": "B. 
PairGAN", "publication_ref": [ "b8", "b16", "b39", "b40", "b69", "b71" ], "table_ref": [], "text": "GAN is widely used to alleviate the modality differences between RGB and IR samples in previous works [9], [17], [40], [41], [70]. Inspired by this, we insert a GAN module, named PairGAN, to translate real RGB images I_{rgb} into fake IR images I'_{ir}. It mainly consists of a generator G_{rgb\to ir} which learns a mapping from RGB to IR, and a discriminator D_{ir} to distinguish between real and fake IR images. Benefiting from the pixel-aligned RGB/IR sample pairs collected in our dataset, the reconstruction loss can be directly applied to supervise G_{rgb\to ir} as
L_{recon} = \|G_{rgb\to ir}(I_{rgb}) - I_{ir}\|_{1}. \quad (4)
To further guarantee that the generator doesn't lose content information, the cycle-consistency loss [72] is introduced, which supervises the RGB (IR) images reconstructed from fake IR (RGB) images as follows:
L_{cycle} = \|G_{ir\to rgb}(G_{rgb\to ir}(I_{rgb})) - I_{rgb}\|_{1} + \|G_{rgb\to ir}(G_{ir\to rgb}(I_{ir})) - I_{ir}\|_{1}, \quad (5)
where G_{ir\to rgb} is the generator which generates fake RGB images from IR images. The definition of the adversarial loss for the discriminators is omitted here for simplicity. The overall loss L_{gan} for PairGAN is the sum of the reconstruction loss, cycle-consistency loss and adversarial loss.
After generating the fake IR tracklets, we input them to the network along with the corresponding real IR tracklets, and train them with the same loss as in Eq.3, denoted by
L^{2} = L^{2}_{ce} + L^{2}_{triplet}. \quad (6)
During inference, the two groups of cross-modal features trained by the two loss functions L^{1} and L^{2} are concatenated together to perform re-identification." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "C. Auxiliary Learning", "publication_ref": [ "b1" ], "table_ref": [], "text": "To exploit the auxiliary set, in which each identity only appears in a single camera, the overall framework is designed as a multi-task learning procedure, i.e., a primary task and an auxiliary task. Specifically, we propose a two-branch framework to train primary samples and auxiliary samples jointly, as shown in Fig. 5(b). The primary branch represents the baseline network (§IV-A, §IV-B) as shown in Fig. 5(a), and its loss function L_{primary} is exactly the L^{1} (L^{2}) in Eq.3 (Eq.6). The auxiliary branch shares the same structure and weights with the primary branch, but takes auxiliary samples as input, and its loss function L_{auxiliary} is the same triplet loss as in Eq.2. Finally, the total loss is calculated as the weighted sum of these two losses:
L_{total} = (1-\alpha)L_{primary} + \alpha L_{auxiliary}, \quad (7)
where α is the factor to balance the two tasks.
The intuitive approach is to set α as a fixed value. However, each identity in the auxiliary set only contains one pair of cross-modality tracklets, without cross-camera positive samples, which makes it less effective for the primary task. A big α will make the auxiliary task seriously interfere with the learning of the primary task. Instead, a small α will lead to the cues in the auxiliary set not being fully mined.
Note that the positive auxiliary samples are almost aligned at the pixel level, without pose and view variations. Therefore, we regard the auxiliary task as a simplified version of the primary task. Inspired by the success of curriculum learning [2], we set α in Eq.7 as a dynamic curriculum factor:
\alpha(E) = \frac{\cos(\pi E) + \phi}{2(1 + \phi)}, \quad (8)
where E ∈ [0, 1] is the normalized epoch index and ϕ is a predefined hyperparameter. A minimal code sketch of this schedule is given below.
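The following is a minimal sketch of the curriculum factor in Eq. (8) and the weighted objective in Eq. (7); it is not the released training code, the function and variable names are illustrative, and the two branch losses are assumed to be computed elsewhere.

```python
import math

def curriculum_alpha(epoch: int, max_epoch: int, phi: float = 3.0) -> float:
    # Eq. (8): alpha(E) = (cos(pi * E) + phi) / (2 * (1 + phi)), with E = epoch / max_epoch in [0, 1].
    E = epoch / max_epoch
    return (math.cos(math.pi * E) + phi) / (2.0 * (1.0 + phi))

def total_loss(loss_primary, loss_auxiliary, epoch: int, max_epoch: int, phi: float = 3.0):
    # Eq. (7): L_total = (1 - alpha) * L_primary + alpha * L_auxiliary.
    alpha = curriculum_alpha(epoch, max_epoch, phi)
    return (1.0 - alpha) * loss_primary + alpha * loss_auxiliary

# With phi = 3, alpha starts at 0.5 at epoch 0 and cosine-decays to 0.25 at the last epoch.
```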
For example, with ϕ = 3, the value of α is initially 0.5, and gradually cosine-decreases to 0.25. In this way, the optimization weight of the auxiliary task decreases as training progresses. At the beginning of training, the model learns from both auxiliary samples (simple curriculum) and primary samples (difficult curriculum) equally. As α decreases, the difficulty of the \"curriculum\" gradually increases, and the optimization emphasis turns to the primary samples. In other words, the auxiliary samples provide an initial optimization direction for fast gradient descent and can accelerate and stabilize the optimization procedure." }, { "figure_ref": [ "fig_2" ], "heading": "D. Temporal K-reciprocal", "publication_ref": [ "b70", "b9", "b35", "b31" ], "table_ref": [], "text": "As discussed in Section IV-A, given the i-th tracklet of length T as input, our network first extracts frame-level features \tilde{F}_i = \{f_i^t\}_{t=1}^{T}. Then, a temporal average pooling operation is utilized to output the final embedding f_i.
In the testing stage, we denote the features of the i-th query sample and the j-th gallery sample as \{\tilde{Q}_i = \{q_i^t\}_{t=1}^{T}, q_i\} and \{\tilde{G}_j = \{g_j^t\}_{t=1}^{T}, g_j\}, respectively. Then the Euclidean distance matrix D_{feat} \in R^{M \times N} between all query-gallery pairs can be calculated, where M and N are the numbers of query and gallery samples. Its (i, j)-th element is the feature distance between q_i and g_j as
d_{feat}(i, j) = (q_i - g_j)^{T}(q_i - g_j). \quad (9)
Finally, the retrieval list is obtained by ranking all gallery samples with D_{feat} for each query sample.
In the original k-reciprocal re-ranking algorithm [71], the k-reciprocal nearest neighbors for query q_i are defined as
R(q_i, k) = \{g_j \mid (g_j \in N(q_i, k)) \wedge (q_i \in N(g_j, k))\}, \quad (10)
where N(q_i, k) is the set of k-nearest neighbors of q_i. Then the new distance between q_i and g_j can be calculated by the Jaccard metric of their k-reciprocal sets as
d_{jacc}(i, j) = 1 - \frac{|R(q_i, k) \cap R(g_j, k)|}{|R(q_i, k) \cup R(g_j, k)|}, \quad (11)
where |\cdot| denotes the cardinality of a set. For simplification, the expanded k-reciprocal nearest neighbors and the local query expansion are not shown here. In implementation, to speed up the calculation, the k-reciprocal neighbors are encoded into feature vectors, and the Jaccard distance in Eq.11 can be implemented by element-wise minimization and maximization. The final distance is defined as the weighted sum of the original feature distance and the Jaccard distance as
d_{rerank}(i, j) = \lambda_1 d_{feat}(i, j) + (1 - \lambda_1) d_{jacc}(i, j), \quad (12)
where λ_1 ∈ [0, 1] denotes the penalty factor. The success of the k-reciprocal algorithm owes to its automatic gallery-to-gallery similarity mining, in which the rich contextual information latent in sample correlations is fully explored. However, it is designed in an instance-level manner, and the fine-grained frame-level cues are not exploited. To solve this problem, we introduce the cross-temporal operation to calculate the temporal correlations between query and gallery samples.
Specifically, for the i-th query, the frame-level feature set \tilde{Q}_i = \{q_i^t\}_{t=1}^{T} is evenly split into L groups along the temporal dimension, where the l-th group is \tilde{Q}_i^l = \{q_i^t\}_{t=lT/L+1}^{(l+1)T/L}, l = 0, ..., L-1. Then temporal average pooling is applied to aggregate the frame-level features in each group respectively, resulting in L embeddings Q_i = \{q_i^0, .., q_i^l, .., q_i^{L-1}\}. 
Similarly, we can get the embedding set for the j-th gallery as G_j = \{g_j^0, .., g_j^l, .., g_j^{L-1}\}. Thus, the cross-temporal operation can be defined as
d_{cross}(i, j) = \sum_{l=0}^{L-1} d_{jacc}(q_i^l, g_j^{L-1-l}), \quad (13)
where d_{jacc}(\cdot, \cdot) is the Jaccard distance of the k-reciprocal sets as in Eq.11. One concise illustration is shown in Fig. 5(c). Therefore, the final distance in Eq.12 can be rewritten as
d_{rerank} = \lambda_1 d_{feat} + (1 - \lambda_1) d_{jacc} + \lambda_2 d_{cross}, \quad (14)
where the indices (i, j) are omitted here for clarity. We term the overall re-ranking algorithm temporal k-reciprocal re-ranking.
Discussion. Compared with the initial k-reciprocal re-ranking algorithm, the introduced cross-temporal operation fully explores the temporal correlations between query and gallery samples in a more fine-grained manner. Each sub-feature q_i^l has a smaller temporal receptive field and contains more detailed temporal information than q_i. Thus, the cross-temporal operation can preserve more cues which may be smoothed out by the global average pooling operation. One similar design is PCB [36], which horizontally splits the input image into multiple parts to extract local fine-grained cues. PCB operates in the spatial domain, which needs to change the network design and induces more computational cost. Another related method is DaF [32], which divides one instance feature into multiple fragments along the feature dimension and explores the diversity among sub-features. Differently, our method operates along the temporal dimension and optimizes the ranking procedure. Moreover, it doesn't change the structure and training procedure of the network, and only adds negligible inference time. A minimal code sketch of the cross-temporal distance is given after the Acknowledgments below." }, { "figure_ref": [], "heading": "V. EXPERIMENTAL RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Empirical Settings", "publication_ref": [ "b29", "b13", "b17" ], "table_ref": [], "text": "We conduct experiments on the constructed BUPTCampus dataset, in which 1,074 IDs are used for primary learning, 930 IDs for auxiliary learning, and 1,076 IDs for testing. Rank1, Rank5, Rank10, Rank20 and mAP are adopted as the evaluation metrics.
We implement our model in PyTorch [30] and train it on two NVIDIA TESLA T4 GPUs. ResNet-34 [14] is utilized as the backbone because it performs better than ResNet-50 in our experiments. Adam [18] is utilized as the optimizer with a weight decay of 1e-5. The learning rate is set to 2e-4 for initialization and is updated with a cosine scheduler. We set the maximum number of epochs to 100 and the batch size to 32, with P = 8 identities and K = 4 tracklets per identity. The tracklet length is 10 and the image resolution is set to 256×128. We use the random sampling strategy for training and the uniform sampling strategy for testing. Random cropping and flipping are used for data augmentation. The margin m in the triplet loss is set to 0.6, and the factor ϕ in Eq.8 is 3. For the temporal k-reciprocal re-ranking algorithm, the intrinsic hyperparameters of the original k-reciprocal algorithm are set as k1 = 5, k2 = 3 and λ_1 = 0.8. The newly added factor λ_2 is set to 0.1, and the number of groups is set as L = 2." }, { "figure_ref": [], "heading": "B. Ablation Study", "publication_ref": [ "b56", "b49", "b23", "b40", "b28", "b56", "b55", "b57", "b62", "b61", "b49", "b23" ], "table_ref": [], "text": "Main Results. 
Tab.III summarizes the overall ablation results of the main components of our method, i.e., PairGAN, auxiliary learning and temporal re-ranking. We can draw the following observations:
• The introduced PairGAN module brings 4% to 6% Rank1 improvement and approximately 4% mAP improvement over the baseline, which validates that it helps modality-invariant learning.
• The designed auxiliary learning method improves Rank1 by 6% and mAP by 5% respectively, which indicates the effectiveness of exploiting the single-camera samples in training.
• The temporal re-ranking algorithm can improve the mAP by 4%. That means it can effectively revise the ranking order with the mined gallery-to-gallery similarities and temporal correlations.
• The above components complement each other. By integrating them all, our final method improves the performance of the baseline by approximately 10% Rank1 and 10% mAP.
Sequence Length. Compared with image-based ReID, video-based ReID can exploit rich temporal information, which helps eliminate the effects of noise and distractors. To verify the necessity of video data, we study the influence of the sequence length in Tab.IV, and we have the following conclusions:
1) A terrible result is obtained when the sequence length is set to 1, in which case the task degrades to image-based ReID. That indicates the image-based methods cannot work well on our benchmark.
2) As the sequence length varies from 1 to 15, the performance is gradually improved, which demonstrates the effectiveness of temporal information.
3) When the sequence length is larger than 15, the Rank1 performance degrades under the \"visible-to-infrared\" mode, which shows that excessive length brings more redundant information.
To achieve a balance between accuracy and efficiency, we set the sequence length to 10 by default.
Auxiliary Learning. We compare different designs of the curriculum factor α in our auxiliary learning method in Tab.VI. Three types of design are listed, i.e., fixed value, exponential decline and cosine decline. For a fixed value, α = 0.3 achieves a trade-off between the primary task and the auxiliary task. As to the exponential decline design, it achieves a similar performance to the fixed value. Differently, the cosine decline strategy performs much better, with improvements of 2% mAP and 3% Rank1 over them.
Temporal Re-ranking. The comparisons between the original k-reciprocal re-ranking algorithm and our proposed temporal re-ranking algorithm are listed in Tab.VII. Four different models are selected as the baseline methods, i.e., DDAG [57], DART [50], MITML [24] and our constructed baseline network. It is shown that the original k-reciprocal algorithm can consistently improve the mAP metrics, but sometimes decreases Rank1. Differently, our temporal re-ranking algorithm can further increase both mAP and Rank1 by a remarkable margin. Moreover, it only adds negligible inference time.
We further study the effects of parameter selection in Tab.VIII. As reported, the performance is robust to the varying values of k1, k2, λ_1, λ_2 and L.
C. Main Results.
For comprehensive comparisons, we reproduce 9 other state-of-the-art methods on BUPTCampus as shown in Tab.V. For image-based methods, i.e., AlignGAN [41], LbA [29], DDAG [57], CAJ [56], AGW [58], MMN [63], DEEN [62], DART [50], the default setting is the sequence length of 1 during inference. 
For a fair comparison, we also report their results with a length of 10, which is implemented by adding a temporal average pooling operation during inference. For the video-based method, MITML [24], we report the results with a length of 6, as it performs better than the length of 10 in our implementation. The results indicate that our solution, dubbed AuxNet, shows substantial superiority to existing state-of-the-art methods.
Furthermore, we conduct experiments on the HITSZ-VCM dataset and the results are shown in Tab.IX. Please note that the proposed \"PairGAN\" and \"auxiliary learning\" methods cannot be utilized here. Therefore, only the results of the baseline (based on ResNet50) and temporal re-ranking are given. The parameters of temporal re-ranking are set to k1=20, k2=6, λ_1=0.3 and λ_2=0.4." }, { "figure_ref": [ "fig_3", "fig_7", "fig_5", "fig_6", "fig_8" ], "heading": "D. Visualization.", "publication_ref": [ "b25" ], "table_ref": [], "text": "Feature Distribution. In order to further analyze the performance difference between baseline and our AuxNet, we compute the feature distance distribution and the feature space distribution. Fig. 6(a) shows the initial distance distributions of the intra-class and inter-class pairs, in which features are extracted by the network without training on BUPTCampus. Fig. 6(b) and (c) visualize the distance distributions obtained by baseline and AuxNet respectively. It is shown that our method can further separate the intra-class and inter-class distances compared with baseline, with a larger difference ∆µ between the means of the two distributions. In Fig. 6(d-f), we plot the feature embeddings in the 2D feature space for visualization using UMAP [26]. The results show that our AuxNet can better distinguish difficult negative samples (marked by red dotted ellipses), and greatly narrows the gap between the two modality samples with the same identity.
PairGAN. Fig. 7 visualizes the sampled real RGB, generated fake IR by PairGAN and corresponding real IR images. The fake IR images preserve the detailed texture cues from the RGB modality, and have a similar color style to the IR modality. Therefore, it can help alleviate the differences between the two modalities and learn modality-invariant embeddings. We further visualize the distribution of samples of these three modalities in Fig. 8. It can be observed that compared to real RGB samples, the fake IR samples have a closer distribution to real IR samples.
Ranking List. Fig. 9 compares the ranking lists of baseline and our AuxNet in both \"visible-to-infrared\" and \"infrared-to-visible\" settings. In the first case, given an RGB query sample (ID 1551), the baseline method retrieves two wrong samples in the top-5 ranking list. Instead, our AuxNet successfully revises them. Specifically, the positive sample under camera \"G25\" has huge modality and view differences from the query sample, and is correctly retrieved. In the second case with an IR query sample (ID 1953), our AuxNet achieves the right top-1 and top-2 matching results." }, { "figure_ref": [], "heading": "VI. CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "In this paper, we contribute a new benchmark for video-based visible-infrared person re-identification, named BUPTCampus, which is the first dataset with RGB/IR tracklet pairs and auxiliary samples. It consists of 3,080 identities, 1,869,066 images and 16,826 tracklets. Furthermore, we construct a two-stream network as the baseline and present the PairGAN module to help modality-invariant learning. To exploit the auxiliary samples, we design to train primary samples and auxiliary samples jointly with a curriculum factor. Finally, we propose a novel temporal k-reciprocal algorithm to re-rank the retrieval results with fine-grained temporal correlation cues. We demonstrate the effectiveness of our method by comparing it with 9 state-of-the-art works. 
We hope the contributed dataset and methods can help to narrow the gap between academic works and realistic applications." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work is supported by Chinese National Natural Science Foundation under Grants (62076033, U1931202) and BUPT Excellent Ph.D. Students Foundation (CX2022145)." } ]
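As a supplement to §IV-D above, the snippet below is a minimal sketch of the cross-temporal term in Eq. (13); it is not the authors' released implementation. The helper `jaccard_dist_fn` is assumed to be an existing k-reciprocal Jaccard-distance routine (e.g., from a standard re-ranking implementation), and all function and variable names are illustrative.

```python
import numpy as np

def split_and_pool(frame_feats: np.ndarray, L: int) -> np.ndarray:
    # Split frame-level features (T, D) into L temporal groups and average-pool each group,
    # giving the L group embeddings q_i^0, ..., q_i^{L-1} of one tracklet.
    T = frame_feats.shape[0]
    bounds = np.linspace(0, T, L + 1).astype(int)
    return np.stack([frame_feats[bounds[l]:bounds[l + 1]].mean(axis=0) for l in range(L)])

def cross_temporal_distance(query_groups, gallery_groups, jaccard_dist_fn, L=2):
    # Eq. (13): d_cross(i, j) = sum_l d_jacc(q_i^l, g_j^{L-1-l}), i.e. group l of each query
    # is matched against group L-1-l of each gallery tracklet.
    # `jaccard_dist_fn(Q, G)` is assumed to return an (M, N) k-reciprocal Jaccard distance matrix.
    d_cross = np.zeros((len(query_groups), len(gallery_groups)), dtype=np.float32)
    for l in range(L):
        q_l = np.stack([q[l] for q in query_groups])               # (M, D)
        g_rev = np.stack([g[L - 1 - l] for g in gallery_groups])   # (N, D)
        d_cross += jaccard_dist_fn(q_l, g_rev)
    return d_cross

# Final distance of Eq. (14): d_rerank = lambda1 * d_feat + (1 - lambda1) * d_jacc + lambda2 * d_cross
```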
Visible-infrared person re-identification (VI-ReID) aims to match persons captured by visible and infrared cameras, allowing person retrieval and tracking in 24-hour surveillance systems. Previous methods focus on learning from cross-modality person images in different cameras. However, temporal information and single-camera samples tend to be neglected. To crack this nut, in this paper, we first contribute a large-scale VI-ReID dataset named BUPTCampus. Different from most existing VI-ReID datasets, it 1) collects tracklets instead of images to introduce rich temporal information, 2) contains pixelaligned cross-modality sample pairs for better modality-invariant learning, 3) provides one auxiliary set to help enhance the optimization, in which each identity only appears in a single camera. Based on our constructed dataset, we present a twostream framework as baseline and apply Generative Adversarial Network (GAN) to narrow the gap between the two modalities. To exploit the advantages introduced by the auxiliary set, we propose a curriculum learning based strategy to jointly learn from both primary and auxiliary sets. Moreover, we design a novel temporal k-reciprocal re-ranking method to refine the ranking list with fine-grained temporal correlation cues. Experimental results demonstrate the effectiveness of the proposed methods. We also reproduce 9 state-of-the-art image-based and videobased VI-ReID methods on BUPTCampus and our methods show substantial superiority to them. The codes and dataset are available at: https://github.com/dyhBUPT/BUPTCampus.
Video-based Visible-Infrared Person Re-Identification with Auxiliary Samples
[ { "figure_caption": "Fig. 2 .2Fig. 2. Difficult cases in the constructed BUPTCampus dataset. \"var.\" is short for \"variations\". Binocular bimodal cameras are used to simultaneously capture samples in both RGB and IR modalities. Different cameras are used to increase style differences.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .Fig. 4 .34Fig.3. Pie chart of the number of cameras per identity. Specifically, 41.16% identities appear in two cameras and only 3.83% identities appear in six cameras.", "figure_data": "", "figure_id": "fig_1", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. The overall framework of our method. (a) The basic framework is built upon the two-stream network, which takes pairs of cross-modality tracklets as input. A PairGAN module is applied to generate fake IR samples. Then the (Real IR, Real RGB) and (Real IR, Fake IR) pairs are respectively supervised. \"Temporal GAP\" is short for \"temporal global average pooling\". (b) Our auxiliary learning method trains primary samples and auxiliary samples jointly. A monotonically decreasing curriculum factor α is used to balance these two branches. (c) In the temporal k-reciprocal re-ranking algorithm, each tracklet is split into L groups along the temporal dimension (L = 4 as an example). Then, the k-reciprocal distances are computed between the features of sub-tracklets of query and gallery samples.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. The distribution of distances and features of the initial model, baseline and our AuxNet on the test set of BUPTCampus. (a-c) The distributions of the intra-class distances (in red ) and inter-class distances (in blue ). ∆µ is the difference between the means of the two types of distances. Larger ∆µ represents stronger discriminative ability.(d-e) Visualization of the corresponding feature space by UMAP[26]. The samples with the same color are from the same ID. The \"cross\" and \"dot\" markers denote the RGB and IR modalities, respectively.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "(a) shows the initial distance distributions of the intra-class and inter-class pairs, in which features are extracted by the network without training on BUPTCampus.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Visualization of sampled real RGB, generated fake IR and real IR images. Same ID for the same column.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig.8. The distribution of real RGB (blue \"cross\"), fake IR (red \"triangle\") and real IR (brown \"dot\") samples visualized by UMAP[26].", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 6 (6Fig.6(b) and (c) visualize the distance distributions obtained by baseline and AuxNet respectively. It is shown that out method can further separate the intra-class and inter-class distances compared with baseline, which has a larger difference ∆µ of the means of the two distributions. In Fig.6(d-f), we plot the feature embeddings in the 2D feature space for visualization using UMAP[26]. 
The results show that our AuxNet can better distinguish difficult negative samples (marked by red dotted ellipses), and greatly narrows the gap between the two modality samples with the same identity. PairGAN. Fig.7visualizes the sampled real RGB, generated fake IR by PairGAN and corresponding real IR images. The fake IR images preserve the detailed texture cues from the RGB modality, and have the similar color style with the IR modality. Therefore, it can help alleviate the differences between the two modalities and learn modality-invariant embeddings. We further visualize the distribution of samples of these three modalities in Fig.8. It can be observed that compared to real RGB samples, the fake IR samples have a closer distribution to real IR samples. Ranking List. Fig.9compares the ranking lists of baseline and our AuxNet in both \"visible-to-infrared\" and \"infrared-to-", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. Sampled retrieval results of our baseline and proposed AuxNet on the BUPTCampus. For one query, the top-5 ranking samples are visualized. The right samples are marked in green , and the false samples are in red .", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "ABLATION STUDY OF PROPOSED METHODS. \"PAIRGAN\" REPRESENTS APPLYING GAN TO GENERATE FAKE IR SAMPLES. \"AUXILIARY\" REPRESENTS JOINTLY TRAINING THE PRIMARY AND AUXILIARY SETS. \"RE-RANKING\" REPRESENTS USING THE PROPOSED TEMPORAL K-RECIPROCAL RE-RANKING ALGORITHM.", "figure_data": "MethodPairGANAuxiliaryRe-RankingR1Visible to Infrared R5 R10 R20mAPR1Infrared to Visible R5 R10 R20mAPBaseline---57.0377.3482.8187.5052.7456.1377.2082.3886.2154.32✓61.5280.2786.1389.6556.3762.4579.5084.8788.3158.92✓63.2879.8884.1888.2857.1162.8480.8484.8788.8959.87✓58.5978.5283.4087.8956.6958.6277.0182.5786.9758.21✓✓63.8781.4586.5289.4558.8664.5681.9988.1290.0460.91✓✓62.7081.4585.7489.4560.1963.6079.8985.2588.3161.07✓✓64.4580.4784.7788.4861.0963.7981.9985.0689.6663.08✓✓✓65.2381.8486.1389.8462.1966.4883.1487.9390.4264.11", "figure_id": "tab_1", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "IMPACT OF SAMPLED SEQUENCE LENGTH IN TRAINING AND TESTING. THE DEFAULT SETTING IS MARKED IN GRAY .", "figure_data": "Sequence LengthVisible to Infrared Rank1 mAPInfrared to Visible Rank1 mAP111.9114.6212.8415.33552.7350.5852.4951.531057.0352.7456.1354.321558.9854.7958.8156.422057.6255.0260.1558.13", "figure_id": "tab_2", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "OF OUR METHOD WITH 9 STATE-OF-THE-ART IMAGE-BASED AND VIDEO-BASED METHODS ON OUR BUPTCAMPUS DATASET. FOR IMAGE-BASED METHODS, WE ADD A TEMPORAL AVERAGE POOLING OPERATION TO AGGREGATE FEATURES FROM SEQ LEN=10 FRAMES DURING TESTING FOR FAIR COMPARISONS. FOR MITML † , WE USE SEQ LEN=6 INSTEAD OF 10 BECAUSE IT YIELDS BETTER RESULTS IN OUR EXPERIMENTS. \"W/O AUX\" REPRESENTS THAT THE AUXILIARY SET IS NOT USED. 
CMC (%) AND MAP (%) ARE REPORTED.", "figure_data": "TypeMethodReferenceSeq LenR1Visible to Infrared R5 R10 R20mAPR1Infrared to Visible R5 R10 R20mAPAlignGAN [41]ICCV20191 1026.11 35.3744.81 53.8952.59 61.3059.81 68.7028.01 35.1320.52 27.9937.87 49.0746.83 57.6553.73 66.6023.25 30.32LbA [29]ICCV20211 1025.93 39.0748.15 58.7057.78 66.4866.67 75.3728.98 37.0625.00 32.0943.66 54.8551.87 65.1161.75 72.5727.07 32.93DDAG [57]ECCV20201 1033.15 46.3056.85 68.1564.63 74.4471.11 81.3033.85 43.0529.29 40.4450.93 40.8659.51 61.3868.47 69.7832.03 78.54ImageCAJ [56] AGW [58]ICCV2021 TPAMI20211 10 1 1036.67 45.00 35.74 43.7054.63 70.00 55.00 64.4463.70 77.04 62.41 73.1570.19 83.33 70.19 80.0035.26 43.61 35.45 41.1029.85 40.49 29.66 36.3853.73 66.79 50.00 60.0761.75 73.32 59.14 67.1669.03 81.16 67.72 76.4932.29 41.46 31.52 37.36MMN [63]MM20211 1038.70 43.7058.15 65.1967.78 73.5275.19 80.9338.82 42.8033.40 40.8654.48 67.1662.69 74.4472.39 80.6035.04 41.71DEEN [62]CVPR20231 1040.19 53.7065.56 74.8173.52 80.7479.44 87.5940.70 50.4335.63 49.8159.70 71.6468.28 80.9776.49 85.8238.67 48.59DART [50]CVPR20221 1046.85 53.3366.11 75.1973.15 81.6779.26 85.7445.05 50.4541.42 52.4364.37 70.5274.07 77.8080.04 83.9643.09 49.10VideoMITML † [24] AuxNet (w/o aux)CVPR2022 ours6 1050.19 62.7068.33 81.4575.74 85.7483.52 89.4546.28 60.1949.07 63.6067.91 79.8975.37 85.2581.53 88.3147.50 61.07AuxNetours1065.2381.8486.1389.8462.1966.4883.1487.9390.4264.11", "figure_id": "tab_3", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "ABLATION STUDY OF DIFFERENT STRATEGIES FOR THE AUXILIARY FACTOR α. \"E\" IS THE NORMALIZED EPOCH INDEX, FROM 0 TO 1. THE DEFAULT SETTING IS MARKED IN GRAY .", "figure_data": "StrategyParametersmAPInfrared to Visible Rank1 Rank5Rank10α = 0.155.0557.8576.0581.42f ixedα = 0.357.8259.7780.0884.10α = 0.555.2656.1377.3982.18τ = 156.0057.6677.7883.14α = 1 2 exp(-τ E)τ = 356.1559.7777.7884.48τ = 554.5657.0977.2082.76ϕ = 157.7660.5477.9783.72α = (cos(πE) + ϕ)/2(1 + ϕ)ϕ = 359.8762.8480.8484.87ϕ = 555.3557.8577.7883.52", "figure_id": "tab_4", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "IMPACT OF PARAMETER SELECTION OF THE PROPOSED TEMPORAL K-RECIPROCAL ALGORITHM FOR THE \"INFRARED TO VISIBLE\" SETTING. THE DEFAULT PARAMETERS ARE k1=5, k2 = 3, λ 1 =0.8, λ 2 =0.1 AND L=2.", "figure_data": "L L L 1 2 4 8mAP 56.08 58.21 58.73 59.13Rank1 56.51 58.62 59.00 59.58Rank5 76.82 77.01 78.54 79.50Rank10 81.99 82.57 83.52 83.52k1 k1 k1 3 4 5 6 7mAP 58.06 58.02 58.21 57.88 57.79Rank1 58.43 58.62 58.62 57.85 57.66Rank5 77.59 77.39 77.01 77.01 77.97Rank10 82.95 82.38 82.57 82.76 82.95k2 k2 k2 1 2 3 4 5mAP 56.86 58.17 58.21 57.24 56.52Rank1 58.05 57.85 58.62 57.28 56.32Rank5 77.78 77.78 77.01 77.97 77.01Rank10 83.33 83.14 82.57 82.57 82.38λ1 λ1 λ1mAPRank1Rank5Rank10λ2 λ2 λ2mAPRank1Rank5Rank100.657.7957.2875.4881.42056.0856.5176.8281.990.758.0057.6676.4481.990.158.2158.6277.0182.570.858.2158.6277.0182.570.258.6859.0077.3982.950.957.9659.0077.2083.330.358.5258.8177.7882.571.057.1958.6277.7882.950.458.4557.6677.0181.80", "figure_id": "tab_6", "figure_label": "VIII", "figure_type": "table" }, { "figure_caption": "OF OUR METHOD WITH OTHER STATE-OF-THE-ART METHODS ON THE HITSZ-VCM DATASET. 
PLEASE NOTE THAT OUR PROPOSED \"PAIRGAN\" AND \"AUXILIARY LEARNING\" CANNOT BE UTILIZED HERE.", "figure_data": "Method | Visible to Infrared (mAP / Rank1) | Infrared to Visible (mAP / Rank1)
LbA (ICCV 2021) [29] | 32.38 / 49.30 | 30.69 / 46.38
MPANet (CVPR 2021) [47] | 37.80 / 50.32 | 35.26 / 46.51
DDAG (ECCV 2020) [57] | 41.50 / 59.03 | 39.26 / 54.62
VSD (CVPR 2021) [37] | 43.45 / 57.52 | 41.18 / 54.53
CAJ (ICCV 2021) [56] | 42.81 / 60.13 | 41.49 / 56.59
MITML (CVPR 2022) [24] | 47.69 / 64.54 | 45.31 / 63.74
Baseline (ours) | 36.90 / 52.34 | 36.26 / 49.32
+ temporal re-ranking | 48.70 / 54.58 | 45.99 / 51.05", "figure_id": "tab_7", "figure_label": "IX", "figure_type": "table" } ]
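Table VI above compares schedules for the auxiliary factor α that weights the primary and auxiliary objectives, with the cosine schedule α(E) = (cos(πE) + ϕ)/(2(1 + ϕ)) and ϕ = 3 performing best. The snippet below is a minimal sketch of this weighting under our own naming; it assumes the two branch losses are computed elsewhere and simply combines them as L_total = (1 − α)·L_primary + α·L_auxiliary.

```python
import math

def curriculum_alpha(epoch: int, total_epochs: int, phi: float = 3.0) -> float:
    """Cosine-decayed auxiliary factor: alpha(E) = (cos(pi*E) + phi) / (2*(1 + phi)),
    where E is the epoch index normalized to [0, 1] (our normalization choice)."""
    E = epoch / max(total_epochs - 1, 1)
    return (math.cos(math.pi * E) + phi) / (2.0 * (1.0 + phi))

def joint_loss(loss_primary, loss_auxiliary, alpha):
    """L_total = (1 - alpha) * L_primary + alpha * L_auxiliary, so the auxiliary
    branch dominates less and less as alpha decays over training."""
    return (1.0 - alpha) * loss_primary + alpha * loss_auxiliary
```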
Yunhao Du; Cheng Lei; Zhicheng Zhao; Yuan Dong; Fei Su
[ { "authors": "S Bai; P Tang; P H Torr; L J Latecki", "journal": "", "ref_id": "b0", "title": "Re-ranking via metric fusion for object retrieval and person re-identification", "year": "2019" }, { "authors": "Y Bengio; J Louradour; R Collobert; J Weston", "journal": "", "ref_id": "b1", "title": "Curriculum learning", "year": "2009" }, { "authors": "A B Chan; N Vasconcelos", "journal": "IEEE", "ref_id": "b2", "title": "Bayesian poisson regression for crowd counting", "year": "2009" }, { "authors": "Y Chen; L Wan; Z Li; Q Jing; Z Sun", "journal": "", "ref_id": "b3", "title": "Neural feature search for rgb-infrared person re-identification", "year": "2021" }, { "authors": "P Dai; R Ji; H Wang; Q Wu; Y Huang", "journal": "", "ref_id": "b4", "title": "Cross-modality person re-identification with generative adversarial training", "year": "2018" }, { "authors": "Y Du; Z Tong; J Wan; B Zhang; Y Zhao", "journal": "IEEE", "ref_id": "b5", "title": "Pami-ad: An activity detector exploiting part-attention and motion information in surveillance videos", "year": "2022" }, { "authors": "Y Du; Z Zhao; Y Song; Y Zhao; F Su; T Gong; H Meng", "journal": "IEEE Transactions on Multimedia", "ref_id": "b6", "title": "Strongsort: Make deepsort great again", "year": "2023" }, { "authors": "X Fan; W Jiang; H Luo; M Fei", "journal": "Journal of Visual Communication and Image Representation", "ref_id": "b7", "title": "Spherereid: Deep hypersphere manifold embedding for person re-identification", "year": "2019" }, { "authors": "X Fan; W Jiang; H Luo; W Mao", "journal": "The Visual Computer", "ref_id": "b8", "title": "Modality-transfer generative adversarial network and dual-level unified latent representation for visible thermal person re-identification", "year": "2022" }, { "authors": "Z Feng; J Lai; X Xie", "journal": "IEEE Transactions on Image Processing", "ref_id": "b9", "title": "Learning modality-specific representations for visible-infrared person re-identification", "year": "2019" }, { "authors": "C Fu; Y Hu; X Wu; H Shi; T Mei; R He", "journal": "", "ref_id": "b10", "title": "Cm-nas: Cross-modality neural architecture search for visible-infrared person re-identification", "year": "2021" }, { "authors": "I Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A Courville; Y Bengio", "journal": "Communications of the ACM", "ref_id": "b11", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "X Hao; S Zhao; M Ye; J Shen", "journal": "", "ref_id": "b12", "title": "Cross-modality person reidentification via modality confusion and center aggregation", "year": "2021" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "Proceedings of the IEEE conference on computer vision and pattern recognition", "ref_id": "b13", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "S He; H Luo; P Wang; F Wang; H Li; W Jiang", "journal": "", "ref_id": "b14", "title": "Transreid: Transformer-based object re-identification", "year": "2021" }, { "authors": "A Hermans; L Beyer; B Leibe", "journal": "", "ref_id": "b15", "title": "In defense of the triplet loss for person re-identification", "year": "2017" }, { "authors": "Y Huang; Q Wu; J Xu; Y Zhong; P Zhang; Z Zhang", "journal": "IEEE Transactions on Multimedia", "ref_id": "b16", "title": "Alleviating modality bias training for infrared-visible person re-identification", "year": "2021" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b17", "title": "Adam: A method for stochastic optimization", 
"year": "2014" }, { "authors": "D P Kingma; M Welling", "journal": "", "ref_id": "b18", "title": "Auto-encoding variational bayes", "year": "2013" }, { "authors": "Q Leng; R Hu; C Liang; Y Wang; J Chen", "journal": "IEEE", "ref_id": "b19", "title": "Bidirectional ranking for person re-identification", "year": "2013" }, { "authors": "Q Leng; R Hu; C Liang; Y Wang; J Chen", "journal": "Multimedia Tools and Applications", "ref_id": "b20", "title": "Person re-identification with content and context re-ranking", "year": "2015" }, { "authors": "J Li; J Wang; Q Tian; W Gao; S Zhang", "journal": "", "ref_id": "b21", "title": "Global-local temporal representations for video person re-identification", "year": "2019" }, { "authors": "M Li; X Huang; Z Zhang", "journal": "", "ref_id": "b22", "title": "Self-supervised geometric features discovery via interpretable attention for vehicle re-identification and beyond", "year": "2021" }, { "authors": "X Lin; J Li; Z Ma; H Li; S Li; K Xu; G Lu; D Zhang", "journal": "", "ref_id": "b23", "title": "Learning modal-invariant and temporal-memory for video-based visible-infrared person re-identification", "year": "2022" }, { "authors": "H Liu; X Tan; X Zhou", "journal": "IEEE Transactions on Multimedia", "ref_id": "b24", "title": "Parameter sharing exploration and heterocenter triplet loss for visible-thermal person re-identification", "year": "2020" }, { "authors": "L Mcinnes; J Healy; J Melville", "journal": "", "ref_id": "b25", "title": "Umap: Uniform manifold approximation and projection for dimension reduction", "year": "2018" }, { "authors": "D T Nguyen; H G Hong; K W Kim; K R Park", "journal": "Sensors", "ref_id": "b26", "title": "Person recognition system based on a combination of body images from visible light and thermal cameras", "year": "2017" }, { "authors": "L Niu; A Veeraraghavan; A Sabharwal", "journal": "", "ref_id": "b27", "title": "Fine-grained classification using heterogeneous web data and auxiliary categories", "year": "2018" }, { "authors": "H Park; S Lee; J Lee; B Ham", "journal": "", "ref_id": "b28", "title": "Learning by aligning: Visibleinfrared person re-identification using cross-modal correspondences", "year": "2021" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga", "journal": "Advances in neural information processing systems", "ref_id": "b29", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "A Radford; L Metz; S Chintala", "journal": "", "ref_id": "b30", "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "year": "2015" }, { "authors": "Rui Yu; Zhichao Zhou; S B Bai; X ", "journal": "BMVA Press", "ref_id": "b31", "title": "Divide and fuse: A reranking approach for person re-identification", "year": "2017-09" }, { "authors": "M S Sarfraz; A Schumann; A Eberle; R Stiefelhagen", "journal": "Proceedings of the IEEE conference on computer vision and pattern recognition", "ref_id": "b32", "title": "A posesensitive embedding for person re-identification with expanded cross neighborhood re-ranking", "year": "2018" }, { "authors": "B Shuai; A Berneshawi; X Li; D Modolo; J Tighe", "journal": "", "ref_id": "b33", "title": "Siammot: Siamese multi-object tracking", "year": "2021" }, { "authors": "Y Suh; J Wang; S Tang; T Mei; K M Lee", "journal": "", "ref_id": "b34", "title": "Part-aligned bilinear representations for person re-identification", "year": "2018" }, { 
"authors": "Y Sun; L Zheng; Y Yang; Q Tian; S Wang", "journal": "", "ref_id": "b35", "title": "Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline)", "year": "2018" }, { "authors": "X Tian; Z Zhang; S Lin; Y Qu; Y Xie; L Ma", "journal": "", "ref_id": "b36", "title": "Farewell to mutual information: Variational distillation for cross-modal person reidentification", "year": "2021" }, { "authors": "S Toshniwal; H Tang; L Lu; K Livescu", "journal": "", "ref_id": "b37", "title": "Multitask learning with low-level auxiliary tasks for encoder-decoder based speech recognition", "year": "2017" }, { "authors": "F Wang; W Zuo; L Lin; D Zhang; L Zhang", "journal": "", "ref_id": "b38", "title": "Joint learning of single-image and cross-image representations for person reidentification", "year": "2016" }, { "authors": "G A Wang; T Zhang; Y Yang; J Cheng; J Chang; X Liang; Z G Hou", "journal": "", "ref_id": "b39", "title": "Cross-modality paired-images generation for rgb-infrared person re-identification", "year": "2020" }, { "authors": "G Wang; T Zhang; J Cheng; S Liu; Y Yang; Z Hou", "journal": "", "ref_id": "b40", "title": "Rgb-infrared cross-modality person re-identification via joint pixel and feature alignment", "year": "2019" }, { "authors": "J Wang; Z Zhang; M Chen; Y Zhang; C Wang; B Sheng; Y Qu; Y Xie", "journal": "Springer", "ref_id": "b41", "title": "Optimal transport for label-efficient visible-infrared person re-identification", "year": "2022" }, { "authors": "Z Wang; Z Wang; Y Zheng; Y Y Chuang; S Satoh", "journal": "", "ref_id": "b42", "title": "Learning to reduce dual-level discrepancy for infrared-visible person re-identification", "year": "2019" }, { "authors": "L Wei; S Zhang; W Gao; Q Tian", "journal": "Proceedings of the IEEE conference on computer vision and pattern recognition", "ref_id": "b43", "title": "Person transfer gan to bridge domain gap for person re-identification", "year": "2018" }, { "authors": "Z Wei; X Yang; N Wang; X Gao", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b44", "title": "Flexible body partition-based adversarial learning for visible infrared person re-identification", "year": "2021" }, { "authors": "A Wu; W S Zheng; H X Yu; S Gong; J Lai", "journal": "", "ref_id": "b45", "title": "Rgb-infrared crossmodality person re-identification", "year": "2017" }, { "authors": "Q Wu; P Dai; J Chen; C W Lin; Y Wu; F Huang; B Zhong; R Ji", "journal": "", "ref_id": "b46", "title": "Discover cross-modality nuances for visible-infrared person reidentification", "year": "2021" }, { "authors": "Y Wu; Y Lin; X Dong; Y Yan; W Ouyang; Y Yang", "journal": "", "ref_id": "b47", "title": "Exploit the unknown gradually: One-shot video-based person re-identification by stepwise learning", "year": "2018" }, { "authors": "T Xiao; S Li; B Wang; L Lin; X Wang", "journal": "", "ref_id": "b48", "title": "Joint detection and identification feature learning for person search", "year": "2017" }, { "authors": "M Yang; Z Huang; P Hu; T Li; J Lv; X Peng", "journal": "", "ref_id": "b49", "title": "Learning with twin noisy labels for visible-infrared person re-identification", "year": "2022" }, { "authors": "M Ye; X Lan; Q Leng", "journal": "", "ref_id": "b50", "title": "Modality-aware collaborative learning for visible thermal person re-identification", "year": "2019" }, { "authors": "M Ye; X Lan; Q Leng; J Shen", "journal": "IEEE Transactions on Image Processing", "ref_id": "b51", "title": "Cross-modality 
person reidentification via modality-aware collaborative ensemble learning", "year": "2020" }, { "authors": "M Ye; X Lan; J Li; P Yuen", "journal": "", "ref_id": "b52", "title": "Hierarchical discriminative learning for visible thermal person re-identification", "year": "2018" }, { "authors": "M Ye; X Lan; Z Wang; P C Yuen", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b53", "title": "Bi-directional center-constrained top-ranking for visible thermal person re-identification", "year": "2019" }, { "authors": "M Ye; C Liang; Y Yu; Z Wang; Q Leng; C Xiao; J Chen; R Hu", "journal": "IEEE Transactions on Multimedia", "ref_id": "b54", "title": "Person reidentification via ranking aggregation of similarity pulling and dissimilarity pushing", "year": "2016" }, { "authors": "M Ye; W Ruan; B Du; M Z Shou", "journal": "", "ref_id": "b55", "title": "Channel augmented joint learning for visible-infrared recognition", "year": "2021" }, { "authors": "M Ye; J Shen; J Crandall; D Shao; L Luo; J ", "journal": "Springer", "ref_id": "b56", "title": "Dynamic dual-attentive aggregation learning for visible-infrared person reidentification", "year": "2020" }, { "authors": "M Ye; J Shen; G Lin; T Xiang; L Shao; S C Hoi", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b57", "title": "Deep learning for person re-identification: A survey and outlook", "year": "2021" }, { "authors": "M Ye; Z Wang; X Lan; P C Yuen", "journal": "", "ref_id": "b58", "title": "Visible thermal person reidentification via dual-constrained top-ranking", "year": "2018" }, { "authors": "X Zhai; A Oliver; A Kolesnikov; L Beyer", "journal": "", "ref_id": "b59", "title": "S4l: Self-supervised semi-supervised learning", "year": "2019" }, { "authors": "Y Zhang; S Zhao; Y Kang; J Shen", "journal": "Springer", "ref_id": "b60", "title": "Modality synergy complement learning with cascaded aggregation for visible-infrared person re-identification", "year": "2022" }, { "authors": "Y Zhang; H Wang", "journal": "", "ref_id": "b61", "title": "Diverse embedding expansion network and low-light cross-modality benchmark for visible-infrared person reidentification", "year": "2023" }, { "authors": "Y Zhang; Y Yan; Y Lu; H Wang", "journal": "", "ref_id": "b62", "title": "Towards a unified middle modality learning for visible-infrared person re-identification", "year": "2021" }, { "authors": "L Zheng; Z Bie; Y Sun; J Wang; C Su; S Wang; Q Tian", "journal": "Springer", "ref_id": "b63", "title": "Mars: A video benchmark for large-scale person re-identification", "year": "2016" }, { "authors": "L Zheng; L Shen; L Tian; S Wang; J Wang; Q Tian", "journal": "", "ref_id": "b64", "title": "Scalable person re-identification: A benchmark", "year": "2015" }, { "authors": "L Zheng; S Wang; L Tian; F He; Z Liu; Q Tian", "journal": "", "ref_id": "b65", "title": "Query-adaptive late fusion for image search and person re-identification", "year": "2015" }, { "authors": "L Zheng; H Zhang; S Sun; M Chandraker; Y Yang; Q Tian", "journal": "", "ref_id": "b66", "title": "Person re-identification in the wild", "year": "2017" }, { "authors": "Z Zheng; L Zheng; Y Yang", "journal": "ACM transactions on multimedia computing, communications, and applications (TOMM)", "ref_id": "b67", "title": "A discriminatively learned cnn embedding for person reidentification", "year": "2017" }, { "authors": "Z Zheng; L Zheng; Y Yang", "journal": "", "ref_id": "b68", "title": "Unlabeled samples generated by gan improve the person 
re-identification baseline in vitro", "year": "2017" }, { "authors": "X Zhong; T Lu; W Huang; M Ye; X Jia; C W Lin", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b69", "title": "Grayscale enhancement colorization network for visible-infrared person reidentification", "year": "2021" }, { "authors": "Z Zhong; L Zheng; D Cao; S Li", "journal": "", "ref_id": "b70", "title": "Re-ranking person re-identification with k-reciprocal encoding", "year": "2017" }, { "authors": "J Y Zhu; T Park; P Isola; A A Efros", "journal": "", "ref_id": "b71", "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "year": "2017" }, { "authors": "J Zhuo; Z Chen; J Lai; G Wang", "journal": "IEEE", "ref_id": "b72", "title": "Occluded person re-identification", "year": "2018" } ]
[ { "formula_coordinates": [ 4, 359.59, 622.81, 203.45, 30.32 ], "formula_id": "formula_0", "formula_text": "L 1 ce = - 1 N N i=1 log( exp(z i,i ) C k=1 exp(z i,k ) ),(1)" }, { "formula_coordinates": [ 4, 316.91, 709.12, 246.13, 38.91 ], "formula_id": "formula_1", "formula_text": "L 1 triplet = N i=1 [m + max ∀yi=yj D(x i , x j ) -min ∀yi̸ =y k D(x i , x k )] + ,(2)" }, { "formula_coordinates": [ 5, 132.21, 533.51, 167.82, 12.69 ], "formula_id": "formula_2", "formula_text": "L 1 = L 1 ce + L 1 triplet .(3)" }, { "formula_coordinates": [ 5, 103.74, 707.63, 196.28, 9.65 ], "formula_id": "formula_3", "formula_text": "L recon = ||G rgb→ir (I rgb ) -I ir || 1 .(4)" }, { "formula_coordinates": [ 5, 343.26, 498.74, 219.77, 24.6 ], "formula_id": "formula_4", "formula_text": "L cycle =||G ir→rgb (G rgb→ir (I rgb )) -I rgb || 1 + ||G rgb→ir (G ir→rgb (I ir )) -I ir || 1 ,(5)" }, { "formula_coordinates": [ 5, 395.22, 634.34, 167.82, 12.69 ], "formula_id": "formula_5", "formula_text": "L 2 = L 2 ce + L 2 triplet .(6)" }, { "formula_coordinates": [ 6, 89.83, 194.77, 206.32, 9.65 ], "formula_id": "formula_6", "formula_text": "L total = -α)L primary + αL auxiliary , (7" }, { "formula_coordinates": [ 6, 296.15, 195.09, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 6, 104.33, 372.91, 195.69, 8.96 ], "formula_id": "formula_8", "formula_text": "α(E) = (cos(πE) + ϕ)/2(1 + ϕ),(8)" }, { "formula_coordinates": [ 6, 357.18, 147.99, 205.86, 23.22 ], "formula_id": "formula_9", "formula_text": "d jacc (i, j) = 1 - |R(q i , k) ∩ R(g j , k)| |R(q i , k) ∪ R(g j , k)| ,(11)" }, { "formula_coordinates": [ 6, 325.32, 281.5, 237.71, 9.65 ], "formula_id": "formula_10", "formula_text": "d rerank (i, j) = λ 1 d f eat (i, j) + (1 -λ 1 )d jacc (i, j),(12)" }, { "formula_coordinates": [ 6, 362.91, 519.83, 200.12, 30.55 ], "formula_id": "formula_11", "formula_text": "d cross (i, j) = L-1 l=0 d jacc (q l i , g L-1-l j ),(13)" }, { "formula_coordinates": [ 6, 332.21, 600.53, 230.82, 9.65 ], "formula_id": "formula_12", "formula_text": "d rerank = λ 1 d f eat + (1 -λ 1 )d jacc + λ 2 d cross ,(14)" } ]
2023-11-27
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b32", "b32" ], "table_ref": [], "text": "With the ever growing interest in 3D scene understanding for autonomous vehicles, semantic segmentation for Li-DAR point clouds has also risen in popularity. To accurately and robustly learn the dense prediction task of generating per point class labels, a high volume of data is not only valuable but required. However manually labeling outdoor LiDAR scenes for semantic segmentation is both time consuming and expensive for large scale datasets.\nThere are two recently explored paths in the literature for reducing the labeling cost of outdoor LiDAR scenes: (i) by employing weak-supervision, where all frames have incomplete labels (e.g. by using line-scribbles [33]) and (ii) by employing semi-supervision, where a subset of frames are labeled and the rest remain completely unlabeled [16].\nCommonly, LiDAR semantic segmentation models suffer from error prone boundary estimation between classes, as well as high false negative rates on both small objects and distant sparse regions. This is caused by the sparsity of LiDAR point clouds which severely reduces the number of points that fall on such regions to form an understandable and well separable geometry. As expected, these errors are further amplified when dealing with incomplete supervision, especially with scribble labels that completely forgo labeling boundaries. It can even be argued that such hard cases potentially need more representation within the dataset for correct and robust learning, something that clearly lacks under data-efficient settings.\nThese errors are severely reduced when operating on a denser representation of a scene (see Fig. 1 -top). Luckily, LiDAR sensors are commonly paired with cameras that are not only cheaper but also provide a dense signal in the form of an RGB image that allows better separable boundaries (especially with the aid of RGB color channels), as well as orders of magnitude more pixels than points on small objects and distant regions. It is for this reason that all autonomous vehicles are equipped with a high resolution camera facing the front of the car to provide a denser and more complete understanding of the critical ego-vehicle path.\nOur goal in this work is to leverage this high resolution image within our 3D pipeline to target the common weaknesses of LiDAR semantic segmentation models trained under incomplete supervision (weak labels). However we face two major challenges: (i) we need to retain our low annotation budget to have a scalable solution, therefore we cannot use additional annotated datasets or pretrained models in our setup; (ii) we need to tackle the issue of the horizontal field-of-view (FOV) mismatch between a LiDAR sensor and camera, where only a subset of points that fall onto the camera FOV have valid correspondence.\nTo this extent, we propose the Image-Guidance network (IGNet) that comprises of two core modules: (M1) domain adaptive image feature distillation that allows us to keep our low annotation budget and (M2) one-way contrastive learning that combats the FOV mismatch by leveraging image features to supervise out-of-image points. Throughout this work, we strictly associate the 2D domain with RGB images and 3D with LiDAR point clouds. M1: Firstly, we train a 2D semantic segmentation model to generate per pixel high level features that better capture shape and context for sparse regions. 
By training on synthetic data, we avoid introducing any additional annotation requirements. We establish point-to-pixel correspondence between the LiDAR point cloud and the camera image (Fig. 1 -bottom), and distill the information from the generated features onto a 3D network via an auxiliary loss.\nHowever, training on synthetic data yields yet another challenge: There exists a domain gap between synthetic images and real images that hinder performance in 2D. To further improve the quality of our image features, we propose using a domain adaptation (DA) pipeline to align our source domain onto the target. We further supervise the DA task via weak image labels generated by projecting the LiDAR labels onto the corresponding image. M2: Next, we tackle the issue of the horizontal FOV mismatch between the camera and the LiDAR sensor. As our image-guidance module requires valid point-pixel correspondences, the auxiliary supervision remains limited to points that fall onto the image. To extend the supervision to points outside of the image, we propose using a one-way contrastive loss guided by a teacher model, allowing points that fall within the image to guide points that fall outside.\nHere we observe that the number of pixel-to-outsidepoint-pairings remains limited as each LiDAR scan has a fixed associated image. This reduces the effect of the contrastive learning, especially since this single image alone often contains zero to a few object instances of each class. To combat this, we introduce a simple mixing strategy called FOVMix, where we cut and paste an image with its corresponding points from one scene onto another. With FOVMix, we are not only able to generate new pixel-point pairings to aid the contrastive learning but also increase the variability within each mini-batches. To summarize:\n• We propose using a synthetically trained 2D semantic segmentation model to guide the 3D network's feature space in order to improve boundary, distant region and sparse object segmentation. • We employ weakly-supervised domain adaptation to further align the 2D features with our dataset. • We extend the supervision from the image-guidance network to points out of the camera field-of-view via a one-way supervised contrastive loss. • We propose a new mixing strategy called FOVMix to introduce additional variety into the dataset along with additional point-pixel pairings to extract further performance from our contrastive loss.\nWe achieve state-of-the-art results for weakly-supervised semantic segmentation on ScribbleKITTI [33]. We further show that IGNet can also be utilized for semi-supervised Li-DAR segmentation to yield state-of-the-art results on both ScribbleKITTI and SemanticKITTI [2].\nIt should be noted that our proposed modules are only required during training, thus the performance boost comes without any additional computational or memory burden compared to the baseline 3D model during inference. Finally, as only synthetic data is required, we also do not introduce any additional annotation costs." 
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b14", "b30", "b18", "b28", "b4", "b11", "b29", "b1", "b32", "b11", "b32", "b29", "b1", "b7", "b16", "b0", "b17", "b20", "b43", "b5", "b9", "b12", "b6", "b8", "b6", "b2", "b16" ], "table_ref": [], "text": "Data Efficient LiDAR Semantic Segmentation: LiDAR semantic segmentation research has heavily focused on understanding how to best process the unordered data structure, with earlier focus on direct point based neural networks [15,24,25,31,34] having later shifted to sparse convolutional networks [9, 19,29,35,43]. As architectures mature, we observe another developing area of interest: data efficiency within LiDAR semantic segmentation.\nAs known, the dense prediction task requires a largescale annotated dataset, which is especially difficult and expensive to obtain for LiDAR point clouds [2]. Recent work therefore investigate two paths that aim to reduce this associated labeling cost: (i) weakly-supervised learning, where every frame is partially labeled, and (ii) semi-supervised learning, where only a subset of frames are labeled and the remaining stay completely unlabeled. However such approaches always come at the cost of performance, as reducing the number of labels within a dataset reduces the supervision provided to the model. Current popular literary work that deal with incomplete labels aim to extend the supervision to unlabeled points by (i) self-supervised training [5, 12,45] where a model is trained on self-generated pseudo-labels or (ii) relying on a guidance network to generate on the fly targets (e.g. mean teacher [30,32,33]).\nFor self-supervised training, CBST [45] proposes to use class-wise thresholding for self-training to reduce confirmation bias. Extending CBST, DARS [12] proposes to redistribute biased pseudo labels for semi-supervised training.\nFor 3D in particular, ScribbleKITTI [33] provides the first realistic benchmark for weakly supervised LiDAR semantic segmentation by introducing the scribble-annotated dataset. In their work, to reduce the gap to fully supervised training, they propose the SSLSS pipeline where they utilize a mean teacher setup [30] to stretch the supervision to unlabeled points, and extend CBST with a range component to deal with the increased sparsity of LiDAR point clouds. For works on indoor point clouds, PSD [38] utilizes similar consistency checks to align clean and perturbed outputs of unlabeled points. WS3D [20] utilizes region-level boundary awareness and instance discrimination to improve indoor and outdoor 3D semantic segmentation with simulated weak labels. Furthermore for semi-supervised learning, DiAL [32] uses a simple MT setup, GPC [16] proposes using a pseudo-label guided point contrastive loss, SSPC [8] utilizes self-training and LaserMix [17] uses a mixing operation to bring supervision to unlabeled frames. CPS [7] utilizes a Siamese structure to induce cross supervision. Multi-Modality with LiDAR and Image: As mentioned, the additional information available in the corresponding RGB image does provide meaningful advantages that can improve LiDAR perception. Yet the task of incorporating this information within a robust pipeline is not trivial.\nFusion has been studied for a number of LiDAR based 3D perception tasks in a supervised and weakly-supervised manner [1,4,18,21,41,42]. For LiDAR semantic segmentation PMF [44] and LIF-Seg [40] fuse the information from streams that process each modality individually to obtain higher information yielding features. 
However such approaches not only require image information during inference but also have linearly increasing memory and computation cost. 2DPASS [36] overcomes this by only using a one way information flow during training. Still, training the image stream on only LiDAR projected labels suffer heavily under incomplete annotations where it hinders performance instead of improving it. Sautier et al. [28] proposes a more general approach of self-supervised pretraining through the alignment of pixel-and point regions that still remains susceptible to forgetting (at a reduced scale).\nMix-Augmentation: Mixing operations have been very successful in increasing variability in the dataset and producing significant performance boosts for many tasks [6, 10,13,22,26,37,39]. CutMix [37] mixes portions of the input and output of one sample image with another. Mix-Match [3] applies the same mixing operation to labeled and unlabeled frames in a semi-supervised setting while generating labels via guessing and sharpening for unlabeled parts to provide supervision. Specifically for semi-supervised learning on LiDAR point clouds, LaserMix [17] aims to introduce variability through cylindrical and range-view partitioning and mixing." }, { "figure_ref": [], "heading": "Data Efficient LiDAR Segmentation", "publication_ref": [], "table_ref": [], "text": "Data efficient LiDAR semantic segmentation aims to reduce the labeling cost associated with the dense prediction task by employing (i) weak supervision, where all frames have incomplete labels (e.g. by using scribble annotations), or (ii) semi supervision, where some frames have labels and others remain unlabeled. In either setting, naively training a model on available labeled points results in a considerable performance drop as only a small subset of points provide supervision. Specifically, we observe an amplified error rate caused by (i) weak boundary estimation between classes and (ii) misclassification of small objects and distant sparse regions, as LiDAR's increased sparsity by range causes a severe reduction in the number of available points on an object to form an understandable geometry." }, { "figure_ref": [ "fig_1" ], "heading": "A Baseline Approach: Mean Teacher", "publication_ref": [ "b32", "b29", "b29" ], "table_ref": [], "text": "As a first step in reducing the performance gap to fully supervised training we employ a generalized approach to utilize all points within the dataset. In specific, to extend the supervision to unlabeled points, following Unal et al. [33], we construct a mean teacher (MT) framework [30], where a student network is trained using a supervised loss H (e.g. cross-entropy) and a teacher network is formed by the exponential moving average (EMA) of the student's weights θ (for time step t):\nθ EMA t = αθ EMA t-1 + (1 -α)θ t(1)\nThe given update rule yields a teacher model that is a better and more robust predictor [23,30]. To exploit this behaviour, we apply a consistency loss between the teacher and the student to align its outputs to the more accurate predictions, e.g. by minimizing the Kullback-Leibler divergence to the softmax outputs. 
Formally, for all points x, the loss function can be redefined as:
L = H(ŷ, y) + 𝟙_U(x) · KL(ŷ ‖ ŷ^EMA)    (2)
with ŷ and ŷ^EMA denoting the predictions of the student and teacher models, y the ground truth labels, and U the set of points without ground truth labels. An illustration of the MT pipeline can be seen in Fig. 2 - green. While a mean teacher framework does allow us to utilize the entire dataset within our training pipeline, due to the lack of direct supervision, similar to the student, the teacher's predictions remain uncertain and error-prone for points that lie on class boundaries or belong to sparsely represented classes (e.g. volumetrically small objects or distant regions), especially when trained on weak scribble labels that completely forgo labeling any boundary points." }, { "figure_ref": [ "fig_1" ], "heading": "Image Guidance via Feature Distillation", "publication_ref": [], "table_ref": [], "text": "To target these weaknesses we propose using image feature distillation from a trained 2D semantic segmentation model. But before we dive into the details, it is important to establish the motivation.
RGB images provide a much denser representation of a scene compared to LiDAR point clouds. This increased density, along with the available color channels, allows easier distinction of both class boundaries and small objects or distant regions. 2D semantic segmentation models can therefore learn better separable and richer features for such pixels. Following this observation, we propose introducing an image guidance (IG) network to exploit the mature features of a trained 2D semantic segmentation model.
Firstly, we apply a forward pass to the camera image using a synthetically trained semantic segmentation model to extract a high level feature representation (θ_IG : [0, 255]^3 → f_IG ∈ R^d). It should be noted that we opt to use synthetic data to avoid introducing any additional annotation burden, as the collection of new labeled samples can be easily automated. Using the available intrinsic and extrinsic camera matrices K and [R|t] respectively, we project the 3D point cloud in homogeneous coordinates x_hom onto the rectified camera coordinates following x_rec^T = K [R|t] x_hom^T and extract point-to-pixel mappings m : x_rec → (k, l) with k = ⌊x_rec^(0) / x_rec^(2)⌋ and l = ⌊x_rec^(1) / x_rec^(2)⌋. A point-to-pixel correspondence is considered valid if the pixel (k, l) falls within the image.
We extend our 3D model with an auxiliary head that maps the final layer features to the image feature dimension d. During training, we introduce a new consistency term between the student and the IG teacher that is applied to all points that have a valid pixel correspondence.
Formally, we restate the loss function to include image-guidance as:
L = H(ŷ, y) + 𝟙_U(x) · KL(ŷ ‖ ŷ^EMA) + L_IG,  with  L_IG = 𝟙_I(x, m(x)) · KL(sm(f) ‖ sm(f_IG))    (3)
with I denoting the set of points with a valid pixel correspondence, sm denoting the softmax operation, and f, f_IG ∈ R^{N'×C} denoting the feature representations of the 3D auxiliary head and the IG decoder respectively. With the addition of the auxiliary loss, the 3D network aims to mimic the more mature representation of the 2D network for points with pixel correspondences. In other words, we introduce a new teacher model, in which boundary points, along with small and distant objects, are more richly defined due to the denser representation, to further and better guide the student on unlabeled points. An illustration of the proposed module can be seen in Fig. 2 - red.
It should be noted that the IG network is only required during training and can be completely removed for inference alongside the auxiliary head, causing no additional memory requirements or time costs to the overall 3D model." }, { "figure_ref": [ "fig_1" ], "heading": "2D Weakly-Supervised Domain Adaptation", "publication_ref": [ "b29" ], "table_ref": [], "text": "As mentioned before, in order to train θ_IG for semantic segmentation, we resort to synthetic data. It has the desirable property that even dense annotations can be automatically generated, so that no additional labeling cost is introduced. However, a model trained on synthetic source data (I_s, S_s) usually experiences a performance drop when applied to real-world target images I_t due to the domain gap.
To tackle this, we propose employing a domain adaptation pipeline to improve the quality of the extracted features and better align them with the data from our real-world training set. Following current literature [14], we re-establish a mean teacher framework [30] and use the teacher model to generate pseudo labels P_t for the target domain images by freezing the unlabeled image predictions. We train the 2D network with a linear classification layer γ not only on the synthetic image-label pairings (I_s, S_s) but also on the target images with pseudo labels (I_t, P_t). Formally, the loss for the 2D model can be defined as:
L = L_S + L_DA,  with  L_S = H(γ(θ_IG(I_s)), S_s)  and  L_DA = H(γ(θ_IG(I_t)), P_t)    (4)
Furthermore, in contrast to common unsupervised domain adaptation, we have access to LiDAR scribble annotations on the target domain. Even though these only provide sparse and possibly noisy supervision (due to projection errors), they can be an important anchor for the adaptation to the target domain. In order to incorporate this additional information into our pipeline, we augment the EMA teacher pseudo-labels P_t with the projected scribble labels, P_t(m(x)) ← y.
We then extend our domain adaptive loss L_DA from Eq. 4 to increase the importance of the projected labels P_t(m(x)) via a weight vector λ⃗_p:
L_DA = λ⃗_p · H(γ(θ_IG(I_t)), P_t)    (5)
with λ⃗_p = λ_p for pixels with a valid point mapping and 1 otherwise. An illustration of the proposed weakly-supervised domain adaptation pipeline can be seen in Fig. 2 - blue.
Finally, to form the image guidance model θ_IG, we copy and freeze the 2D student model (following unsupervised domain adaptation convention [14]) without the linear classifier and use its generated features to guide the 3D student model during training."
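As a concrete illustration of the image-guidance term L_IG in Eq. (3), the PyTorch sketch below gathers the frozen 2D features at the projected pixel locations and computes the KL divergence against the softmax-normalized 3D auxiliary-head features for points with a valid correspondence. The tensor layout and names are our own assumptions and not the released implementation.

```python
import torch
import torch.nn.functional as F

def image_guidance_loss(f_3d: torch.Tensor, f_2d: torch.Tensor,
                        pixel_kl: torch.Tensor, valid: torch.Tensor) -> torch.Tensor:
    """
    f_3d:     (N, C) features from the 3D auxiliary head (one row per LiDAR point).
    f_2d:     (C, H, W) frozen feature map from the 2D image-guidance network.
    pixel_kl: (N, 2) integer (k, l) pixel coordinates from the point-to-pixel mapping m.
    valid:    (N,) boolean mask, True where the projected pixel falls inside the image.
    Returns the mean KL(sm(f) || sm(f_IG)) over valid points, mirroring Eq. (3).
    """
    if valid.sum() == 0:
        return f_3d.new_zeros(())
    k, l = pixel_kl[valid, 0], pixel_kl[valid, 1]
    f_ig = f_2d[:, l, k].t()                        # (N_valid, C): 2D features at (k, l)
    p = F.softmax(f_3d[valid], dim=1)               # student distribution over channels
    log_p = F.log_softmax(f_3d[valid], dim=1)
    log_q = F.log_softmax(f_ig, dim=1)              # frozen IG teacher distribution
    return (p * (log_p - log_q)).sum(dim=1).mean()  # KL(p || q), averaged over points
```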
}, { "figure_ref": [ "fig_2", "fig_4" ], "heading": "Extending the Supervision Beyond the Image", "publication_ref": [], "table_ref": [], "text": "With image-guidance (Eq. 3) the information distillation from the mature 2D features to the 3D pipeline is limited by the availability of point-pixel correspondences. For many cases, we are limited to a front facing camera, so there exists a big mismatch between the horizontal FOV of the two sensors. Under such a setup, the set of all points with valid pixel correspondence (I) is much smaller than the set of all points without a valid correspondence (O = I ∩ P ), i.e. |I| < |O|. In other words, the lack 360 • coverage for the camera means that points with pixel correspondence only make up a small portion of the LiDAR point cloud.\nTo be able to guide points outside of the image using the 2D domain adapted features, we introduce an extension to the image-guidance loss with a one-way supervised contrastive loss (CL).\nLet I (c) ⊆ I and O (c) ⊆ O define two sets of points inside and outside of the image respectively with associated class c = argmax ŷEMA , given by the teacher's prediction. Formally, we define the one-way supervised contrastive loss as:\nL CL = c o∈O (c) -log   1 |O (c) | i∈I (c) exp(f o • f IG,i /τ ) i ′ ∈I exp(f o • f IG,i ′ /τ )  (\n6) with τ denoting the temperature. The total loss can then be formulated as:\nL =H(ŷ, y) + 1 U (x) KL(ŷ || ŷEMA ) + L IG + λL CL (7)\nwith λ denoting the scale hyperparameter.\nAs illustrated in Fig. 3, the loss extension aims to apply a pull force to all points towards pixels of the same category while also applying a push to all points away from pixels of a different class. We therefore align the features of points outside of the image with the features of the 2D image-guidance network. To accompany this, we further take all points that are within the image FOV of sample A, and paste them onto sample B while removing all points of B that were in the same region. An illustration of FOVMix can be seen in Fig. 4. Formally, we define the mixing operation as:" }, { "figure_ref": [], "heading": "FOVMix", "publication_ref": [], "table_ref": [], "text": "x = [M AA ⊙ x A , (1 -M AA ) ⊙ x B ] ỹ = [M AB ⊙ y A , (1 -M AB ) ⊙ y B ] Ĩ = I A (8)\nM AB , M AA ∈ {0, 1} N denote the binary masks that yield the points within the image FOV given the intrinsic projection matrix A and extrinsic projection matrices A and B respectively, ⊙ and [,] denoting a dot product for masking and concatenation operations. Thus, FOVMix does not depend a specific sensor/setting, but only relies on the availability of point to pixel correspondences, which is expected for systems with both a LiDAR sensor and camera.\nFOVMix is a simple operation that accomplishes two feats: (i) it increases the effectiveness of the one-way contrastive loss by introducing additional pairings of points inside-outside of the image, (ii) it increases the richness of the data within each mini-batch. While FOVMix introduces noise along the boundaries of the image FOV similar to other mixing methods commonly used in dense vision tasks, the increased diversity and richness of each mini-batch is a worthy trade-off against the introduced noise." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b1", "b32" ], "table_ref": [], "text": "Implementation details: We use Cylinder3D [43] as a baseline 3D model. For the mean teacher, we follow convention and set the update hyperparameter α = 0.999 [32]. 
For the domain adaptive 2D pipeline we follow DAFormer [14]. We heuristically balance the losses by setting λ = 0.001 and λ_p = 10. For the semi-supervised setting, we restrict set A in FOVMix to labeled frames to ensure we have direct supervision in all samples, and apply additional rotation augmentation before the FOVMix operations to increase variability. Datasets: We run our experiments on the ScribbleKITTI [33] dataset, which provides realistic weak labels for LiDAR semantic segmentation in the form of scribbles. ScribbleKITTI is built on SemanticKITTI [2, 11], the most popular large-scale outdoor-scene dataset for LiDAR semantic segmentation, and shares the same validation set. The weak labels only provide annotations for 8% of the point count and completely forgo class boundaries. Thus, compared to dense annotations, labeling times are reduced 10-fold.
For the 2D synthetic training, we use the GTA-V dataset, which contains 24,966 synthetic images with pixel-level semantic annotation. The images are generated using a modded version of the open-world video game Grand Theft Auto 5." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b32", "b32", "b16" ], "table_ref": [], "text": "Weakly-Supervised LiDAR Segmentation: We report the performance of our image-guidance network (IGNet) trained with scribble supervision in Tab. 1. As seen, IGNet outperforms the previous SOTA, showing improvements across the board for all classes and reaching 96.4% relative performance compared to fully supervised training while only using 8% labeled points. Specifically, we observe large gains for small object categories such as bicycle and motorcycle when compared to the previous SOTA SSLSS [33].
It should be noted that, in contrast to SSLSS, IGNet does not require self-training. Therefore the training times are considerably reduced (from 5 days to 1, including the 2D training, using 8 Nvidia RTX 2080Ti GPUs). Still, to further push performance, we introduce IGNet++: we replace the Cylinder3D backbone of SSLSS with IGNet and therefore employ the same class-range-balanced self-training scheme on top of our image guidance, achieving 63% mIoU, i.e. 98% relative performance compared to fully supervised training.
Table 3. Ablation study where, starting from the baseline Cylinder3D, we introduce one-by-one the mean teacher (MT), as well as our proposed image guidance (IG), contrastive loss (CL) and FOVMix modules. Alongside the mIoU, we also report the relative mIoU (rel) compared to the fully supervised baseline.
Semi-Supervised LiDAR Segmentation: We also show that IGNet can be used for all data-efficient LiDAR semantic segmentation settings. In particular, we report results for (i) semi-supervised training using SemanticKITTI [2] and (ii) semi- and weakly-supervised training on ScribbleKITTI [33], where we carry out experiments on a semi-supervised setting while training with a weakly-supervised
We also report a direct comparison to the baseline Cylin-der3D model where IGNet shows great absolute mIoU improvements of 4.1%-9.7% while introducing no additional memory or computational requirements during inference." }, { "figure_ref": [ "fig_6" ], "heading": "Ablation Studies", "publication_ref": [ "b32" ], "table_ref": [], "text": "We conduct ablation studies on the ScribbleKITTI [33] dataset, where alongside the mIoU, we also report the relative performance of our model compared to the baseline Cylinder3D [43] trained on densely annotated labels. Effects of Network Components: We first investigate the effects our proposed components. Starting from a baseline model, we introduce each module one by one, reporting the mIoU and relative performances in Tab. 3. As seen each component provides a considerable performance gain over the baseline. Specifically we see a 2% gain when we introduce our domain adapted image-guidance network, and a further 0.2% when we introduce our contrastive loss/FOVMix individually. When utilizing both modules, we see that the constrastive loss can benefit from additional point pairings established via the FOVMix operation, which reflects in the gain of 0.8% (as opposed to 0.3%). Is Domain Adaptation Necessary? We further investigate the necessity of domain adaptation for our image-guidance network. Starting from a mean teacher framework, we compare the performance of our 3D model when guided by the DAFormer model [14] trained on (i) weak labels that we generate by projecting 3D scribbles onto the image, and (ii) the synthetically generated GTA-V dataset [27], as well as the complete DAFormer pipeline (model + DA) with (iii) GTA-V → ScribbleKITTI, and (iv) GTA-V → Scrib-bleKITTI with additional projected weak supervision. The results are shown in Tab. 4 which emphasize the importance of DA and the usefulness of the weak supervision. Where do the Improvements Come From? Our goal when using image features to guide our 3D model is to ex- ploit the better representation capabilities of 2D semantic segmentation models trained on denser representations for (i) border points, where channels can provide finer separation compared to noisy LiDAR measurements, (ii) small object and sparsely represented regions, where the pixel count remains considerably higher compared to the LiDAR point count. Finally, we conduct an ablation study to investigate if this behaviour can be observed in the model accuracy after introducing the 2D image-guidance module. In Tab. 5, we isolate the effects of our image guidance module by directly comparing to the mean teacher. Firstly, we show that the introduction of image-guidance does boost the border accuracy significantly (+3.5%). Here, we classify points to be on a border if any of its closes N = 16 neighbors in 3D space do not share the same class. Second, we observe that IGNet obtains a considerably better performance (+6.6%) on small objects (pedestrians and two-wheelers) compared to the gain in larger objects (+1.5% for four-wheelers). Lastly, when comparing accuracy changes by range, sparsely represented distant regions beyond 25m of range show an improvement of +2.0% when compared to the MT baseline, while close regions only see marginal gains of +0.4%. Here we conclude that image-guidance can indeed compensate for the common weaknesses seen in LiDAR segmentation, especially under weak supervision.\nApart from quantitative results, we also showcase examples from the valid-set illustrating this effect in Fig. 5. 
Apart from quantitative results, we also showcase examples from the validation set illustrating this effect in Fig. 5. Here we show that IGNet can (top) finely determine object boundaries, (middle) better segment small objects (Cylinder3D and SSLSS misidentify some bicyclist points), and (bottom) improve recognition for sparsely represented regions (IGNet correctly segments all three sparse objects)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we tackle common weaknesses of data-efficient LiDAR semantic segmentation by distilling high-level feature information from a synthetically trained 2D semantic segmentation network. We reduce the domain gap between synthetic and real data by employing weakly supervised DA. We extend the supervision from image pixels to out-of-FOV points via a one-way contrastive loss and construct new pairings via FOVMix. With our proposed IGNet, we achieve better boundary estimation, increase performance in distant, sparse regions, and heavily improve small-class segmentation. We achieve SOTA results in both weakly- and semi-supervised 3D semantic segmentation. Limitations: Compared to the baseline Cylinder3D, IGNet requires roughly twice the training time due to its two-stage approach. Furthermore, the feature distillation module requires RGB images paired with the LiDAR scans. While all current LiDAR-equipped autonomous systems have an accompanying camera setup, our method still relies on the sensors being calibrated to obtain valid pairings." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements: This work was funded by Toyota Motor Europe via the research project TRACE Zurich." } ]
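As an addendum to the feature-distillation idea summarized above, the following is a minimal sketch of the image-guidance term: it gathers frozen 2D features at the projected pixels of in-FOV points and aligns the 3D student features to them via a softmax-KL divergence. Tensor names, shapes, and the helper signature are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (names and shapes are assumptions): align 3D student features with
# frozen 2D features at the corresponding pixels for points inside the camera FOV.
import torch
import torch.nn.functional as F

def image_guidance_loss(feat_3d, feat_2d, pixel_uv, in_fov_mask):
    """
    feat_3d:     (P, C) per-point features from the 3D student
    feat_2d:     (C, H, W) frozen per-pixel features from the trained 2D student
    pixel_uv:    (P, 2) integer (u, v) pixel coordinates of the projected points
    in_fov_mask: (P,) bool, True where a valid point-pixel pairing exists
    """
    u = pixel_uv[in_fov_mask, 0].long()
    v = pixel_uv[in_fov_mask, 1].long()
    guidance = feat_2d[:, v, u].t().detach()          # (P_fov, C), 2D model is frozen
    student = feat_3d[in_fov_mask]                    # (P_fov, C)
    p = F.softmax(student, dim=1)                     # 3D student distribution
    log_p = F.log_softmax(student, dim=1)
    log_q = F.log_softmax(guidance, dim=1)            # frozen 2D guidance distribution
    return (p * (log_p - log_q)).sum(dim=1).mean()    # KL(student || guidance) per point
```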
As 3D perception problems grow in popularity and the need for large-scale labeled datasets for LiDAR semantic segmentation increases, new methods arise that aim to reduce the necessity for dense annotations by employing weakly-supervised training. However, these methods continue to show weak boundary estimation and high false negative rates for small objects and distant sparse regions. We argue that such weaknesses can be compensated for by using RGB images, which provide a denser representation of the scene. We propose an image-guidance network (IGNet) which builds upon the idea of distilling high-level feature information from a domain-adapted, synthetically trained 2D semantic segmentation network. We further utilize a one-way contrastive learning scheme alongside a novel mixing strategy called FOVMix, to combat the horizontal field-of-view mismatch between the two sensors and enhance the effects of image guidance. IGNet achieves state-of-the-art results for weakly-supervised LiDAR semantic segmentation on ScribbleKITTI, boasting up to 98% relative performance compared to fully supervised training with only 8% labeled points, while introducing no additional annotation burden or computational/memory cost during inference. Furthermore, we show that our contributions also prove effective for semi-supervised training, where IGNet claims state-of-the-art results on both ScribbleKITTI and SemanticKITTI.
2D Feature Distillation for Weakly- and Semi-Supervised 3D Semantic Segmentation
[ { "figure_caption": "Figure 1 .1Figure 1. While boundaries and sparse distant regions are difficult to determine in 3D, 2D models can leverage the denser image pixels for finer estimation. With image-guidance via feature alignment, points with pixel correspondences aim to mimic the 2D model features via an auxiliary loss.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Pipeline for image feature distillation. We first establish point-pixel correspondences between the LiDAR point cloud and image.(blue) The available weak point labels are then used to generate weak image labels that supervise a 2D network alongside synthetic data. We utilize a mean teacher framework to adapt from the synthetic domain to the real domain. (green) We train a 3D model using a mean teacher framework to utilize both weak annotations and unlabeled points. (red) We copy and freeze the trained 2D student model to generate per pixel features that act as a guidance for the 3D student features via an auxiliary loss.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Illustration of the one-way supervised contrastive loss. Points with pixel correspondence guide points outside of the image field-of-view via pull and push forces applied based on available weak labels.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Illustration of the proposed mixing strategy FOVMix that not only increases the variety within the training set but also generates new point pairing inside-outside of the image field-ofview to further guide all points.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Finally, we introduce a new mixing operation called FOVMix. Given two data samples (x A , y A , I A ) and (x B , y B , I B ), the goal of FOVMix is to generate a new training sample (x, ỹ, Ĩ). Simply put, we take an image from sample A and replace it with the image of sample B.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Qualitative results comparing state-of-the-art scribble-supervised LiDAR semantic segmentation methods. As seen, utilizing 2D image features as guidance during the training pipeline, IGNet does improve (top) boundary estimation between classes, (middle) small object segmentation, (bottom) distant, sparse object recognition. We change the color of bicyclist in (middle) for better visibility.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Weakly-supervised 3D semantic segmentation results on ScribbleKITTI. We not only show results from our proposed imageguidance network (IGNet), but also its performance difference compared to the baseline Cylinder3D model and the results of using IGNet within a class-range-balanced self-training pipeline (IGNet++). 
* indicated methods that do not use Cylinder3D as their backbone.", "figure_data": "MethodmIoUcarbicyclem.cycletrucko.vehiclepersonbicyclistm.cyclistroadparkingsidewalko.groundbuildingfencevegetationtrunkterrainpolet.signCylinder3D [43] 57.0 88.5 39.9 58.0 58.4 48.1 68.6 77.0 0.5 84.4 30.4 72.2 2.5 89.4 48.4 81.9 64.6 59.8 61.2 48.7MinkNet* [9]58.5 91.1 23.8 59.0 66.3 58.6 65.2 75.2 0.0 83.8 36.1 72.4 0.7 90.2 51.8 86.7 68.5 72.5 62.5 46.6SPVCNN* [29] 56.9 88.6 25.7 55.9 67.4 48.8 65.0 78.2 0.0 82.6 30.4 70.1 0.3 90.5 49.6 84.4 67.6 66.1 61.6 48.7MT [30]59.0 91.0 41.1 58.1 85.5 57.1 71.7 80.9 0.0 87.2 35.1 74.6 3.3 88.8 51.5 86.3 68.0 70.7 63.4 49.5CBST [45]60.8 92.4 39.1 58.5 78.5 57.0 70.0 77.4 0.0 86.9 35.4 74.3 7.3 89.8 55.6 85.1 66.7 68.1 62.0 51.1DARS [12]60.8 91.9 39.3 57.9 78.6 53.3 69.5 77.1 0.0 86.6 37.2 74.2 8.3 89.8 54.5 86.5 68.8 70.1 63.4 49.0SSLSS [33]61.3 91.0 41.1 58.1 85.5 57.1 71.7 80.9 0.0 87.2 35.1 74.6 3.3 88.8 51.5 86.3 68.0 70.7 63.4 49.5IGNet (Ours)62.0 90.7 47.6 64.5 83.2 60.5 74.5 81.3 0.0 88.6 34.6 75.5 2.3 90.6 53.0 83.5 69.5 63.7 63.6 51.5∆ Cylinder3D+5.0 +2.2 +7.7 +6.5 +24.8 +12.4 +5.9 +4.3 -0.5 +4.2 +4.2 +3.3 -0.2 +1.2 +4.6 +1.6 +4.9 +3.9 +2.4 +2.8IGNet++ (Ours) 63.0 94.6 44.8 67.5 78.3 55.9 72.7 85.5 0.0 88.5 42.3 75.9 2.1 90.4 53.4 87.3 70.4 70.8 63.5 52.2SemanticKITTI [2]ScribbleKITTI [33]Method1% 10% 20% 50% 1% 10% 20% 50%Cylinder3D [43] 45.4 56.1 57.8 58.7 39.2 48.0 52.1 53.8DiAL [30, 32]45.4 57.1 59.2 60.0 41.0 50.1 52.8 53.9CBST [45]48.8 58.3 59.4 59.7 41.5 50.6 53.3 54.5CPS [7]46.7 58.7 59.6 60.5 41.4 51.8 53.9 54.8GPC [16]34.6 49.9 58.8-----WS3D [20]38.9 52.3 61.4-----LaserMix [17]50.6 60.0 61.9 62.3 44.2 53.7 55.1 56.8IGNet49.0 61.3 63.1 64.8 44.4 57.7 59.6 60.8∆ Cylinder3D+4.6 +5.2 +5.3 +4.1 +5.2 +9.7 +7.5 +7.0", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of state-of-the-art methods for semi-supervised LiDAR semantic segmentation. The uniform frame sampling rate is indicated by [%].", "figure_data": "MT IG CL FOVMix mIoU rel ∆rel57.0 88.6-✓59.0 91.8 +3.2✓ ✓61.3 95.3 +6.7✓ ✓ ✓61.5 95.6 +7.0✓ ✓✓61.5 95.6 +7.0✓ ✓ ✓✓62.0 96.4 +7.8", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study showing the effects of domain adaptation for the image-guidance network. (U) indicated unsupervised and (W) indicates weakly-supervised training.", "figure_data": "SourceTargetmIoU rel ∆ mIoU ∆ relBorderObjectDistanceSKITTI (W)-60.3 93.7--MethodTrue False Small Large 0-25m 25m+GTA-V-60.2 93.6--Cylinder3D 62.5 91.8 73.0 94.0 87.7 84.6GTA-VSKITTI (U) 61.1 95.0 +0.9+1.4MT62.0 92.7 76.9 95.0 88.4 85.2GTA-VSKITTI (W) 61.3 95.3 +1.1+1.7IGNet65.5 92.6 83.5 96.5 88.8 87.2∆ MT+3.5 -0.1 +6.6 +1.5 +0.4 +2.0", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study on ScribbleKITTI showing where the accuracy improves with our proposed image-guidance module.", "figure_data": "Ground TruthCylinder3DMean TeacherSSLSSIGNet (Ours)", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" } ]
Ozan Unal; Dengxin Dai; Lukas Hoyer; Yigit Baran Can; Luc Van Gool
[ { "authors": "Xuyang Bai; Zeyu Hu; Xinge Zhu; Qingqiu Huang; Yilun Chen; Hongbo Fu; Chiew-Lan Tai", "journal": "", "ref_id": "b0", "title": "Transfusion: Robust lidar-camera fusion for 3d object detection with transformers", "year": "2022" }, { "authors": "Jens Behley; Martin Garbade; Andres Milioto; Jan Quenzel; Sven Behnke; Cyrill Stachniss; Jurgen Gall", "journal": "", "ref_id": "b1", "title": "Semantickitti: A dataset for semantic scene understanding of lidar sequences", "year": "2019" }, { "authors": "David Berthelot; Nicholas Carlini; Ian Goodfellow; Nicolas Papernot; Avital Oliver; Colin Raffel", "journal": "", "ref_id": "b2", "title": "Mixmatch: A holistic approach to semi-supervised learning", "year": "2019" }, { "authors": "Luca Caltagirone; Mauro Bellone; Lennart Svensson; Mattias Wahde", "journal": "Robotics and Autonomous Systems", "ref_id": "b3", "title": "Lidar-camera fusion for road detection using fully convolutional neural networks", "year": "2019" }, { "authors": "Paola Cascante-Bonilla; Fuwen Tan; Yanjun Qi; Vicente Ordonez", "journal": "", "ref_id": "b4", "title": "Curriculum labeling: Revisiting pseudolabeling for semi-supervised learning", "year": "2020" }, { "authors": "John Chen; Samarth Sinha; Anastasios Kyrillidis", "journal": "PMLR", "ref_id": "b5", "title": "Stackmix: A complementary mix algorithm", "year": "2022" }, { "authors": "Xiaokang Chen; Yuhui Yuan; Gang Zeng; Jingdong Wang", "journal": "", "ref_id": "b6", "title": "Semi-supervised semantic segmentation with cross pseudo supervision", "year": "2021" }, { "authors": "Mingmei Cheng; Le Hui; Jin Xie; Jian Yang", "journal": "", "ref_id": "b7", "title": "Sspc-net: Semi-supervised semantic 3d point cloud segmentation network", "year": "2021" }, { "authors": "Christopher Choy; Junyoung Gwak; Silvio Savarese", "journal": "", "ref_id": "b8", "title": "4d spatio-temporal convnets: Minkowski convolutional neural networks", "year": "2019" }, { "authors": "Gianni Franchi; Nacim Belkhir; Mai Lan Ha; Yufei Hu; Andrei Bursuc; Angela Volker Blanz; Yao", "journal": "", "ref_id": "b9", "title": "Robust semantic segmentation with superpixel-mix", "year": "2021" }, { "authors": "Andreas Geiger; Philip Lenz; Christoph Stiller; Raquel Urtasun", "journal": "The International Journal of Robotics Research", "ref_id": "b10", "title": "Vision meets robotics: The kitti dataset", "year": "2013" }, { "authors": "Ruifei He; Jihan Yang; Xiaojuan Qi", "journal": "", "ref_id": "b11", "title": "Re-distributing biased pseudo labels for semi-supervised semantic segmentation: A baseline investigation", "year": "2021" }, { "authors": "Lukas Hoyer; Dengxin Dai; Yuhua Chen; Adrian Köring; Suman Saha; Luc Van Gool", "journal": "", "ref_id": "b12", "title": "Three ways to improve semantic segmentation with self-supervised depth estimation", "year": "2021" }, { "authors": "Lukas Hoyer; Dengxin Dai; Luc Van Gool", "journal": "", "ref_id": "b13", "title": "DAFormer: Improving network architectures and training strategies for domain-adaptive semantic segmentation", "year": "2022" }, { "authors": "Qingyong Hu; Bo Yang; Linhai Xie; Stefano Rosa; Yulan Guo; Zhihua Wang; Niki Trigoni; Andrew Markham", "journal": "", "ref_id": "b14", "title": "Randla-net: Efficient semantic segmentation of large-scale point clouds", "year": "2020" }, { "authors": "Li Jiang; Shaoshuai Shi; Zhuotao Tian; Xin Lai; Shu Liu; Chi-Wing Fu; Jiaya Jia", "journal": "", "ref_id": "b15", "title": "Guided point contrastive learning for semi-supervised point cloud semantic 
segmentation", "year": "2021-10" }, { "authors": "Lingdong Kong; Jiawei Ren; Liang Pan; Ziwei Liu", "journal": "", "ref_id": "b16", "title": "Lasermix for semi-supervised lidar semantic segmentation", "year": "2022" }, { "authors": "Yingwei Li; Adams Wei Yu; Tianjian Meng; Ben Caine; Jiquan Ngiam; Daiyi Peng; Junyang Shen; Yifeng Lu; Denny Zhou; Quoc V Le", "journal": "", "ref_id": "b17", "title": "Deepfusion: Lidar-camera deep fusion for multi-modal 3d object detection", "year": "2022" }, { "authors": "Erin Venice; Thi Liong; Ngoc Tho; Sergi Nguyen; Dhananjai Widjaja; Zhuang Jie Sharma; Chong", "journal": "", "ref_id": "b18", "title": "Amvnet: Assertion-based multi-view fusion network for lidar semantic segmentation", "year": "2020" }, { "authors": "Kangcheng Liu; Yuzhi Zhao; Qiang Nie; Zhi Gao; Ben M Chen", "journal": "Springer", "ref_id": "b19", "title": "Weakly supervised 3d scene segmentation with region-level boundary awareness and instance discrimination", "year": "2022" }, { "authors": "Qinghao Meng; Wenguan Wang; Tianfei Zhou; Jianbing Shen; Luc Van Gool; Dengxin Dai", "journal": "Springer", "ref_id": "b20", "title": "Weakly supervised 3d object detection from lidar point cloud", "year": "2020" }, { "authors": "Wilhelm Viktor Olsson; Juliano Tranheden; Lennart Pinto; Svensson", "journal": "", "ref_id": "b21", "title": "Classmix: Segmentation-based data augmentation for semi-supervised learning", "year": "2021" }, { "authors": "T Boris; Anatoli B Polyak; Juditsky", "journal": "SIAM journal on control and optimization", "ref_id": "b22", "title": "Acceleration of stochastic approximation by averaging", "year": "1992" }, { "authors": "Hao Charles R Qi; Kaichun Su; Leonidas J Mo; Guibas", "journal": "", "ref_id": "b23", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2017" }, { "authors": "Li Charles R Qi; Hao Yi; Leonidas J Su; Guibas", "journal": "", "ref_id": "b24", "title": "Point-net++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Xuhong Ren; Bing Yu; Hua Qi; Felix Juefei-Xu; Zhuo Li; Wanli Xue; Lei Ma; Jianjun Zhao", "journal": "", "ref_id": "b25", "title": "Few-shot guided mix for dnn repairing", "year": "2020" }, { "authors": "Vibhav Stephan R Richter; Stefan Vineet; Vladlen Roth; Koltun", "journal": "", "ref_id": "b26", "title": "Playing for data: Ground truth from computer games", "year": "2016" }, { "authors": "Corentin Sautier; Gilles Puy; Spyros Gidaris; Alexandre Boulch; Andrei Bursuc; Renaud Marlet", "journal": "", "ref_id": "b27", "title": "Image-to-lidar self-supervised distillation for autonomous driving data", "year": "2022" }, { "authors": "* Haotian; Zhijian * Tang; Shengyu Liu; Yujun Zhao; Ji Lin; Hanrui Lin; Song Wang; Han", "journal": "", "ref_id": "b28", "title": "Searching efficient 3d architectures with sparse point-voxel convolution", "year": "2020" }, { "authors": "Antti Tarvainen; Harri Valpola", "journal": "", "ref_id": "b29", "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "year": "2017" }, { "authors": "Hugues Thomas; Charles R Qi; Jean-Emmanuel Deschaud; Beatriz Marcotegui; Leonidas J Franc ¸ois Goulette; Guibas", "journal": "", "ref_id": "b30", "title": "Kpconv: Flexible and deformable convolution for point clouds", "year": "2019" }, { "authors": "Ozan Unal; Dengxin Dai; Ali Tamer Unal; Luc Van Gool", "journal": "IEEE Robotics and Automation Letters", "ref_id": 
"b31", "title": "Discwise active learning for lidar semantic segmentation", "year": "2023" }, { "authors": "Ozan Unal; Dengxin Dai; Luc Van Gool", "journal": "", "ref_id": "b32", "title": "Scribblesupervised lidar semantic segmentation", "year": "2006" }, { "authors": "Ozan Unal; Luc Van Gool; Dengxin Dai", "journal": "", "ref_id": "b33", "title": "Improving point cloud semantic segmentation by learning 3d object detection", "year": "2021" }, { "authors": "Jiantao Xu Yan; Jie Gao; Ruimao Li; Zhen Zhang; Rui Li; Shuguang Huang; Cui", "journal": "", "ref_id": "b34", "title": "Sparse single sweep lidar point cloud segmentation via learning contextual shape priors from scene completion", "year": "2020" }, { "authors": "Jiantao Xu Yan; Chaoda Gao; Chao Zheng; Ruimao Zheng; Shenghui Zhang; Zhen Cui; Li", "journal": "", "ref_id": "b35", "title": "2dpass: 2d priors assisted semantic segmentation on lidar point clouds", "year": "2022" }, { "authors": "Sangdoo Yun; Dongyoon Han; Seong Joon Oh; Sanghyuk Chun; Junsuk Choe; Youngjoon Yoo", "journal": "", "ref_id": "b36", "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "year": "2019" }, { "authors": "Feihu Zhang; Jin Fang; Benjamin Wah; Philip Torr", "journal": "Springer", "ref_id": "b37", "title": "Deep fusionnet for point cloud semantic segmentation", "year": "2020" }, { "authors": "Ke Zhang; Xiahai Zhuang", "journal": "", "ref_id": "b38", "title": "Cyclemix: A holistic strategy for medical image segmentation from scribble supervision", "year": "2022" }, { "authors": "Lin Zhao; Hui Zhou; Xinge Zhu; Xiao Song; Hongsheng Li; Wenbing Tao", "journal": "", "ref_id": "b39", "title": "Lif-seg: Lidar and camera image fusion for 3d lidar semantic segmentation", "year": "2021" }, { "authors": "Weikun Zhen; Yaoyu Hu; Jingfeng Liu; Sebastian Scherer", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b40", "title": "A joint optimization approach of lidar-camera fusion for accurate dense 3-d reconstructions", "year": "2019" }, { "authors": "Huazan Zhong; Hao Wang; Zhengrong Wu; Chen Zhang; Yongwei Zheng; Tao Tang", "journal": "Procedia Computer Science", "ref_id": "b41", "title": "A survey of lidar and camera fusion enhancement", "year": "2021" }, { "authors": "Xinge Zhu; Hui Zhou; Tai Wang; Fangzhou Hong; Yuexin Ma; Wei Li; Hongsheng Li; Dahua Lin", "journal": "", "ref_id": "b42", "title": "Cylindrical and asymmetrical 3d convolution networks for lidar segmentation", "year": "2020" }, { "authors": "Zhuangwei Zhuang; Rong Li; Kui Jia; Qicheng Wang; Yuanqing Li; Mingkui Tan", "journal": "", "ref_id": "b43", "title": "Perception-aware multi-sensor fusion for 3d lidar semantic segmentation", "year": "2021-10" }, { "authors": "Yang Zou; Zhiding Yu; B V K Vijaya Kumar; Jinsong Wang", "journal": "", "ref_id": "b44", "title": "Unsupervised domain adaptation for semantic segmentation via class-balanced self-training", "year": "2018-09" } ]
[ { "formula_coordinates": [ 3, 370.35, 556.4, 174.76, 12.47 ], "formula_id": "formula_0", "formula_text": "θ EMA t = αθ EMA t-1 + (1 -α)θ t(1)" }, { "formula_coordinates": [ 3, 352.71, 669.28, 192.4, 12.01 ], "formula_id": "formula_1", "formula_text": "L = H(ŷ, y) + 1 U (x) KL(ŷ || ŷEMA )(2)" }, { "formula_coordinates": [ 4, 308.86, 338.06, 236.25, 23.21 ], "formula_id": "formula_2", "formula_text": "x rec → (k, l) with k = ⌊x (0) rec /x (2) rec ⌋ and ⌊l = x (1) rec /x" }, { "formula_coordinates": [ 4, 315.32, 475.23, 229.79, 26.95 ], "formula_id": "formula_3", "formula_text": "L =H(ŷ, y) + 1 U (x) KL(ŷ || ŷEMA ) + L IG with L IG = 1 I (x, m(x)) KL(sm(f ) || sm(f IG ))(3)" }, { "formula_coordinates": [ 5, 105.39, 313.4, 180.97, 39.54 ], "formula_id": "formula_4", "formula_text": "L = L S + L DA with L S = H(γ(θ IG (I s )), S s ) and L DA = H(γ(θ IG (I t )), P t ) (4)" }, { "formula_coordinates": [ 5, 108.62, 507.48, 177.74, 16.94 ], "formula_id": "formula_5", "formula_text": "L DA = # » λ p H(γ(θ IG (I t )), P t )(5)" }, { "formula_coordinates": [ 5, 308.86, 512.68, 239.46, 47.64 ], "formula_id": "formula_6", "formula_text": "L CL = c o∈O (c) -log   1 |O (c) | i∈I (c) exp(f o • f IG,i /τ ) i ′ ∈I exp(f o • f IG,i ′ /τ )  (" }, { "formula_coordinates": [ 5, 314.86, 600.48, 230.25, 12.01 ], "formula_id": "formula_7", "formula_text": "L =H(ŷ, y) + 1 U (x) KL(ŷ || ŷEMA ) + L IG + λL CL (7)" }, { "formula_coordinates": [ 6, 94.05, 453.41, 192.32, 41.37 ], "formula_id": "formula_8", "formula_text": "x = [M AA ⊙ x A , (1 -M AA ) ⊙ x B ] ỹ = [M AB ⊙ y A , (1 -M AB ) ⊙ y B ] Ĩ = I A (8)" } ]
10.48550/arXiv.2302.04023
2023-11-27
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b35", "b2", "b9", "b14", "b51", "b18", "b21", "b37", "b4", "b3", "b1", "b43", "b12", "b14" ], "table_ref": [ "tab_3" ], "text": "Modern machine learning models typically require a huge collection of precisely labeled data, which can be a labor-intensive and time-consuming process. Even worse, it can be unrealistic in some practical scenarios that demand much expertise, such as medical diagnosis and industrial applications. To this end, a plethora of approaches have been investigated to reduce the burden of annotation, including semi-supervised learning (Sohn et al., 2020;Berthelot et al., 2019), learning with label noise (Han et al., 2018;Li et al., 2020), and so on. Amongst them, active learning (Ein-Dor et al., 2020; Yuan et al., 2020;Margatina et al., 2021) is a prominent solution that interactively queries an external expert or oracle to mark the new data points that the model wants to learn from. These methods alleviate the labeling burden to some extent but still require human efforts in the annotation or construction of the oracle to start with.\nThe recent prevalent large language models (LLMs) (Ouyang et al., 2022;Thoppilan et al., 2022;OpenAI, 2023), such as ChatGPT and PaLM (Chowdhery et al., 2022), have exhibited strong zero-shot learning ability by proper prompt design, yet becoming a new remedy for data efficiency. Even more inspiringly, LLMs emerge with the so-called in-context learning (ICL) (Brown et al., 2020) ability to learn from a few task-related labeled samples for boosted performance. Despite the promise, some studies (Bang et al., 2023) find that LLMs tend to underperform compared to finetuned small language models (SLMs) on challenging tasks, which is also verified in our empirical studies (Table 3). One possible reason is that ICL can not fully exploit supervised training samples due to limited context length. Moreover, their extremely large size and limited accessibility also hinder their training and generalization on specific tasks. To date, it is still questionable how can we generalize to downstream tasks with the least human annotation in the era of LLMs.\nIn this work, we present a novel collaborative learning paradigm FreeAL that revolutionizes traditional active learning by interactively distilling and filtering the task-related knowledge from the LLMs. Our intuition is that, while LLMs are hard to fine-tune, they are competent zero-shot learners (Wei et al., 2022;Kojima et al., 2022) and can provide coarse-grained knowledge for downstream tasks. On the other hand, SLMs are effective weak learners (Li et al., 2020) that can distill valuable clean samples from noisy supervision. To integrate LLMs and SLMs synergistically as a whole, we design a collaborative training framework where LLM operates as an active annotator infusing its knowledge and the SLM acts as a student to filter out the high-quality input-label pairs to feed back the LLM for subsequent label refinery. Empirically, FreeAL iteratively boosts the unsupervised performance of both SLMs and LLMs during collaborative training for transductive and inductive settings. 
As depicted in Figure 1, FreeAL allows us to achieve an extraordinary annotation-performance trade-off by obtaining competitive results on par with the supervised counterparts while fully eliminating human annotation costs.\nOverall, our main contributions can be summarized as follows,\n• To the best of our knowledge, we are among the first to overhaul traditional active learning in the era of LLMs for boosted generalization performance without any human supervision.\n• We propose a novel collaborative learning framework called FreeAL to employ the LLMs as active annotators and the SLMs as weak filters to interactively distill the taskrelated knowledge from the LLMs.\n• Our proposed FreeAL largely improves the unsupervised learning performance for both the LLMs and the SLMs, even approaching the supervised counterparts in some scenarios.\nOur results prove the feasibility of human-free active labeling in the era of LLMs.\n2 Related Work" }, { "figure_ref": [], "heading": "Prompt-based Zero/Few-shot Learning", "publication_ref": [ "b48", "b27", "b3", "b15", "b17", "b7", "b43", "b26", "b45", "b19" ], "table_ref": [], "text": "The emergent ability of LLMs has sparked heightened interest in prompt-based zero-shot and fewshot learning (Ye et al., 2021;Schick and Schütze, 2021). Instead of fine-tuning on massive downstream data, in-context learning (ICL) (Brown et al., 2020), which suits LLMs to new tasks with fewshot input-label exemplars as demonstrations without training, has shown promising few-shot performance. It has been further improved by later works (Liu et al., 2022;Lu et al., 2022;SU et al., 2023).\nOn the other hand, zero-shot learning is much more challenging without task-specific data. Direct steering LLMs for predictions without in-context demonstrations can lead to significantly degraded performance (Gao et al., 2021). To bridge this, some methods (Wei et al., 2022;Sanh et al., 2022;Xu et al., 2022) adopt instruction tuning with a multi-task paradigm to further pre-train the LLMs with a collection of different tasks in shared prompting templates. However, these methods require cumbersome training for LLMs and the overwhelming bulk of cross-task human annotations. Another new line of research (Ye et al., 2022a;Meng et al., 2022;Ye et al., 2022b) endeavors to ameliorate zero-shot learning merely via dataset generation, while the synthesized data commonly involves a notable portion of low-quality samples and misses the nuanced semantics present in the original data. In our work, we take inspiration from active learning with an innovative viewpoint to distill and filter the rich knowledge from LLMs for boosted zero-shot generalization performance." }, { "figure_ref": [], "heading": "Active Learning", "publication_ref": [ "b51", "b32", "b41", "b31", "b24", "b18", "b29", "b25", "b0" ], "table_ref": [], "text": "Active learning (AL) is a prevailing paradigm in various NLP tasks (Yuan et al., 2020;Zhao et al., 2020;Shelmanov et al., 2021;Wang et al., 2022) that aims to reduce labeling effort by selecting only the most useful examples to annotate. In each iteration of active learning, a model is trained on the currently labeled data and then tasked with selecting the most informative yet-to-be-labeled data point to be labeled for boosted performance. 
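As a concrete illustration of such an acquisition step, the snippet below sketches generic entropy-based querying; it is illustrative only and not tied to any particular method cited here.

```python
# Generic sketch of one active learning acquisition step (illustrative only):
# rank the unlabeled pool by predictive entropy and query the oracle on the top-k.
import numpy as np

def entropy_acquisition(probs: np.ndarray, k: int) -> np.ndarray:
    """probs: (N, C) softmax outputs of the current model on the unlabeled pool."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:k]        # indices of the k most uncertain samples

# Each round: train on the labeled set, score the pool, send the returned indices
# to the human oracle, move the newly labeled samples over, and repeat.
```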
Based on different querying strategies (Settles and Craven, 2008), existing traditional active learning methods can be categorized into uncertainty-based methods (Prabhu et al., 2019;Margatina et al., 2021) and diversity-based methods (Sener and Savarese, 2018;Ru et al., 2020;Ash et al., 2020). While these methods relieve the annotation burden to some extent, they still count on human experts as expensive supervision sources to start with. To overcome this high cost, we investigate the opportunities of leveraging the rich knowledge of LLMs as a lowcost supervision source for boosting generalization performance without human effort." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "We consider unsupervised classification tasks without human annotations. Given an unlabeled training dataset D train = {x i } n i=1 with n samples, where x ∈ X is the input text and the corresponding ground-truth label y ∈ Y = {1, . . . , C} is inaccessible. Our task is to predict the true label for both the training dataset D train and test dataset D test . Our framework employs a pre-trained large language model (LLM) P and a downstream small language model (SLM) S. For the LLM, we define a natural language template T (•) which contains additional task-related information and a verbalizer V (•) which maps each class label in {1, . . . , C} to a pre-defined token in the prompt. For the finetuning of SLM S with parameters θ, we adopt the cross entropy loss l i = -j∈Y ỹi j log S j (x i , θ)\nfor training, where S j (x i , θ) is the j-th entry of SLM's output softmax probability for the input x i with the pseudo label ỹi j . )} m j=1 retrieved from D demo as the demonstration. The final prompt steers the LLM and the prediction is obtained via," }, { "figure_ref": [], "heading": "Few-shot", "publication_ref": [], "table_ref": [], "text": "arg max P y∈Y (V (y) | T (x demo 1 , ỹdemo 1 ), ..., T (x demo m , ỹdemo m ), T (x test ))(1)\nDespite the simplicity, the success of ICL largely hinges on the demonstration pool D demo , which requires human efforts of careful annotation for every individual scenario and can be particularly annoying for challenging tasks. To bridge this gap, we resort to our proposed plug-and-play method FreeAL without involving any human supervision." }, { "figure_ref": [ "fig_1" ], "heading": "FreeAL", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce our proposed framework FreeAL which investigates the opportunity for human-free active learning in the LLMs era. In contrast to traditional active learning that requests human annotation in each training loop, FreeAL employs LLMs as weak annotators. In each training loop, we alternate the following steps:\n1. Active labeling of the to-be-labeled samples via LLMs based on the feedback from SLMs.\n2. Training weakly supervised SLMs to distill the task-related knowledge from noisy annotations of LLMs and in turn feedback to them.\nThe overview of the FreeAL framework is displayed in Figure 2 and its overall pipeline is also shown in Algorithm 1. In what follows, we will elaborate on our FreeAL framework minutely." }, { "figure_ref": [], "heading": "Active Labeling by LLMs", "publication_ref": [ "b20", "b15" ], "table_ref": [], "text": "In this step, we leverage the strong in-context learning ability of LLMs to assign weak labels to unsupervised training corpora. In particular, the core challenge lies in the construction of a proper prompt containing demonstration samples. 
To this end, we introduce two practical strategies for the different life cycles of FreeAL.\nInitial Annotation by Self-generated Demonstration. At the initial round of FreeAL, we are given a purely unsupervised training set D train . To enable pseudo-labeling via LLMs, we may directly perform zero-shot ICL without access to a demonstration pool D demo . However, such a strategy can largely impede the knowledge-distilling process of SLMs due to shoddy initial annotations. To remedy this, we design a novel self-generated demonstration technique by virtual data generation. Notably, when given some unlabeled samples and taskdescriptive instructions, humans can imitate the expression styles of these texts and leverage their own knowledge to generate similar label-aware samples.\nMotivated by this, we steer LLMs to first mimic the format of unlabeled samples from D train , which is important to ICL according to recent research (Min et al., 2022), and then generate new label-aware examples to construct the initial D demo . Specifically, the data-generation prompt contains a hand-crafted task-descriptive instruction ρ gen that explains the task background and Q randomly-selected unlabeled samples c gen from D train as prototypes to imitate. An example of the prompt is shown in Appendix B.3. The generation process can be formulated as,\n{(x gen , ỹgen )} ← P (ρ gen , T (c gen ))(2)\nThe generated samples constitute the generated dataset D gen = {(x gen , ỹgen )}, which is then used as demonstration pool (i.e.,D demo = D gen ) for the subsequent labeling. Next, we follow the standard ICL pipelines with demonstration selection (Liu et al., 2022). Each prompt contains m-nearestneighbors from D demo with the highest embedding similarity to x i . The ICL process follows Eq.( 1).\nWith the demonstrations seen in the prompt, the LLM is able to provide passable initial annotations ỹ of the training dataset D train = {x i , ỹi } n i=1 , the annotations ỹ are employed as pseudo-labels for the subsequent training of SLM." }, { "figure_ref": [], "heading": "Algorithm 1 Pipeline of FreeAL", "publication_ref": [], "table_ref": [], "text": "Input: Unlabeled dataset D train = {x i } n i=1 ; pretrained LLM P and a downstream SLM S; 1: round ← 1 2: while not convergent do In-context learning as Eq.( 1) for labeling; \nD noisy = D train \\ (∪ j∈Y D j clean ) 20:\nFeed D demo and D noisy back to LLM 21:\nround ← round + 1; 22: end while Refined Annotation in Subsequent Rounds. In the later rounds, the SLM S is trained using the weak annotation given by the LLM P. Meanwhile, the SLM filters out a high-quality demonstration pool D demo as feedback; details are shown in Section 4.2. Then with a high-quality D demo , the LLM P re-annotates the remaining noisy-prone samples via few-shot ICL according to Eq. (1)." }, { "figure_ref": [], "heading": "Knowledge Distillation by SLMs", "publication_ref": [ "b9", "b14" ], "table_ref": [], "text": "Given the acquired weak annotations from LLM, it is difficult for the LLM to distinguish its own errors due to the confirmation bias. Fortunately, previous studies (Han et al., 2018;Li et al., 2020) in weaklysupervised learning have shown that deep models have the potential of detecting noisy samples during the training procedure. Therefore, after receiving weak labels, our intention is two-fold: (i)-train a strong and robust downstream SLM that maximally distills task-specific knowledge; (ii)-employ the derived SLM to filter out a high-quality demonstration pool to feedback LLM." 
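Before turning to the SLM side, the annotation step of Section 4.1 can be summarized in a short sketch. The `embed` and `query_llm` helpers and the prompt wording are hypothetical placeholders (an encoder and an LLM API call), not part of the released implementation.

```python
# Minimal sketch of the LLM annotation step in Section 4.1 (illustrative only):
# retrieve the m most similar demonstrations from D_demo and assemble an ICL prompt.
import numpy as np

def retrieve_demos(query_vec, demo_vecs, demos, m=10):
    """demo_vecs: (N, d) unit-normalized embeddings of the (text, label) pairs in D_demo."""
    sims = demo_vecs @ query_vec                 # cosine similarity for normalized vectors
    return [demos[i] for i in np.argsort(-sims)[:m]]

def annotate(x, demos, demo_vecs, embed, query_llm, label_words):
    picked = retrieve_demos(embed(x), demo_vecs, demos, m=10)
    prompt = "Classify the sentiment of the movie review as positive or negative.\n"
    for text, label in picked:
        prompt += f"Review: {text}\nSentiment: {label_words[label]}\n"
    prompt += f"Review: {x}\nSentiment:"
    reply = query_llm(prompt)                    # e.g., one chat-completion request
    inverse = {v: k for k, v in label_words.items()}
    return inverse.get(reply.strip().lower(), None)   # None signals an ambiguous reply
```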
}, { "figure_ref": [], "heading": "Robust Self-training", "publication_ref": [ "b52", "b14", "b30" ], "table_ref": [], "text": "Motivated by the memorization effect of DNNs (Zhang et al., 2017), the SLM tends to first fit easy patterns in the early stage of training. Thus, noisy samples mostly pose larger loss values. To this end, we adopt the selection-based technique (Li et al., 2020) from noisy label learning to train a robust SLM for knowledge distillation.\nFormally, after a few warm-up epochs with standard training on noisy labels, given the standard cross-entropy loss l i that reflects how well the model fits the sample x i , we fit a two-component GMM to the loss l i to find out those clean samples. Let w i = p(g | l i ) represent the probability of x i belonging to the Gaussian component with smaller mean g, which can also be deemed as its clean probability. Then we divide the training dataset into a clean subset and a noisy subset by setting a threshold τ on w i , which is considered as a labeled set D l and a noisy set D u respectively,\nD l = {(x i , ỹi ) | x i ∈ D train , w i ≥ τ } , D u = {(x i ) | x i ∈ D train , w i < τ }(3)\nTo improve the robustness of training, we utilize consistency regularization for boosted performance, which assumes that a classifier should produce a similar prediction on a local neighbor of each data point. Given an input x i , we adopt backtranslation (Sennrich et al., 2016) to paraphrase it and obtain the augmented version x aug i . For the labeled and unlabeled data, the consistency regularizations are formulated,\nL l cr = 1 |D l | x i ∈D l l ce (x aug i , ỹi ), L u cr = 1 |D u | x i ∈Du l kl (S(x aug i ), S(x i )) (4)\nwhere l ce and l kl are standard cross entropy and KL divergence respectively. Finally, the total loss for self-training of SLM is aggregated,\nL total = L clean + α(L l cr + L u cr ) (5)\nwhere L clean is the cross entropy loss on D l , α is the loss weight parameter. We refer readers to Appendix B.1 for more implementation details." }, { "figure_ref": [], "heading": "Demonstration Pool Filtering", "publication_ref": [ "b52" ], "table_ref": [], "text": "While the SLM S can filter out a clean subset to enhance its performance during self-training, other stubborn noisy labels are hard to correct by SLM itself due to the confirmation bias. Thanks to our robust SLM, we can filter out those clean and representative samples and construct a high-quality demonstration pool D demo for the LLM to refurbish its potentially wrong predictions in previous rounds. One may directly reuse the GMM-based selection criterion again and take D l as demonstrations. However, such a selection procedure is too aggressive since excessively over-selecting some noisy samples may still improve the self-training procedure. To this end, we would like to filter out a more curated D demo that prioritizes representative examples with accurate labels to be included. The construction process mainly contains two steps in a class-wise manner to cover every class and ensure diversity. For the training subset D j train of class j, following the memory effect of DNNs (Zhang et al., 2017), we utilize the small loss criterion and select samples with the smallest crossentropy loss l i in the first R percent to construct\nD j clean = {(x i , ỹi ) | rank(l i ) ≤ R%, ỹi = j}.\nIn practice, we set a small R to ensure the high precision of D j clean . 
Secondly, we further adopt a simple clustering algorithm k-medoids on the embeddings of SLM to filter out the most representative medoids samples from D j clean to construct D j demo . When the k-medoids algorithm gets converged, the medoids of k clusters are collected as D j demo . Finally the integral demonstration set is merged from each class as D demo = ∪ j∈Y D j demo . With a high quality D demo , the great potential of LLM P can be unleashed to refine those noisyprone samples D noisy = D train \\ (∪ j∈Y D j clean ) via few-shot ICL as described in section 4.1." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "In this section, we provide the experimental results to verify the effectiveness of the FreeAL framework. More results, including visualizations and model selection, can be found in Appendix." }, { "figure_ref": [ "fig_1" ], "heading": "Setup", "publication_ref": [ "b34", "b23", "b22", "b39", "b38", "b3", "b15", "b10", "b33", "b11", "b18", "b44", "b8" ], "table_ref": [], "text": "Datasets. We evaluate the performance of FreeAL on both sequence-level and token-level tasks. For sequence-level tasks, we choose SST-2 (Socher et al., 2013), MR (Pang and Lee, 2005) dataset for sentiment classification, SUBJ (Pang and Lee, 2004) dataset for subjectivity classification and TREC (Voorhees and Tice, 2000) for topic classification. For token-level tasks, CoNLL03 (Tjong Kim Sang and De Meulder, 2003) 2. For all experiments, we run three times and report the averaged results.\nBaselines. We compare FreeAL with multiple zero-shot and supervised baselines for LLMs and SLMs respectively. For LLMs, they are vanilla zero-shot ICL without demonstrations (Brown et al., 2020), supervised ICL with standard demonstration retrieval (Liu et al., 2022) from humanlabeled training data, and supervised ICL with kmedoids to first filter a representative subset for demonstration retrieval. For SLMs, they are zeroshot distillation (Hinton et al., 2015;Smith et al., 2022) that finetunes the SLMs by using the annotations from zero-shot ICL of LLM as ground-truths, and standard supervised fine-tuning that finetunes the SLM with human-labeled data. We also compare FreeAL with some traditional active learning baselines in section 5.3.1, including (1) Random: It acquires annotation of to-be-labeled data randomly.\n(2) Entropy (Holub et al., 2008): It is the most commonly used uncertainty-based baseline that acquires samples with the highest predictive entropy.\n(3) CAL (Margatina et al., 2021): It is a recent active learning method that acquires contrastive examples for pre-trained language models.\nImplementation Details. We adopt OpenAI's GPT-3.5-Turbo language model, also known as ChatGPT, as our LLM and we use RoBERTa-Base from Huggingface Transformers (Wolf et al., 2020) as the downstream SLM. For the biomedical tasks including MA and BC5DER dataset, we utilize a BioMed-RoBERTa-base (Gururangan et al., 2020) that is pre-trained on the Semantic Scholar corpus as SLM for boosted performance. For fair comparisons, all the ICL processes of the LLM comprise m = 10 context examples as demonstrations except on MA dataset 5 is adopted due to the maximum context length 4,096 for GPT-3.5-Turbo. The collaborative training process is performed on the training dataset first and then the fine-tuned SLM and the D demo for LLM are utilized to be evaluated on the test dataset. More details of the robust self-training are put in Appendix B.2." 
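To make the two-step filtering of Section 4.2 concrete before turning to the results, the sketch below shows one way it could be implemented. It is an illustrative assumption rather than the authors' code; the values τ = 0.7 and R = 20 mirror those stated above, and the k-medoids helper is a plain alternating variant used only for exposition.

```python
# Minimal sketch of the filtering in Section 4.2 (illustrative, not the authors'
# code): a two-component GMM on per-sample losses gives clean probabilities, the
# smallest-loss R% per class forms D^j_clean, and a plain k-medoids pass on SLM
# embeddings picks representative demonstrations for D^j_demo.
import numpy as np
from sklearn.mixture import GaussianMixture

def clean_split(losses, tau=0.7):
    gmm = GaussianMixture(n_components=2).fit(losses.reshape(-1, 1))
    post = gmm.predict_proba(losses.reshape(-1, 1))
    w = post[:, np.argmin(gmm.means_.ravel())]       # prob. of the small-mean (clean) component
    return w, w >= tau                               # clean probability and the D_l mask

def small_loss_subset(losses, labels, cls, ratio=0.20):
    idx = np.where(labels == cls)[0]
    return idx[np.argsort(losses[idx])][: max(1, int(ratio * len(idx)))]  # D^j_clean indices

def k_medoids(emb, k, iters=20, seed=0):
    """Plain alternating k-medoids on a pairwise Euclidean distance matrix."""
    dist = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
    medoids = np.random.RandomState(seed).choice(len(emb), k, replace=False)
    for _ in range(iters):
        assign = np.argmin(dist[:, medoids], axis=1)            # nearest medoid per point
        new_medoids = []
        for c in range(k):
            members = np.where(assign == c)[0]
            if len(members) == 0:
                new_medoids.append(int(medoids[c]))
                continue
            within = dist[np.ix_(members, members)].sum(axis=0)
            new_medoids.append(int(members[np.argmin(within)])) # most central member
        if set(new_medoids) == set(int(m) for m in medoids):
            break
        medoids = np.array(new_medoids)
    return medoids                                              # indices used as D^j_demo
```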
}, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_1", "tab_2" ], "text": "Table 1 and Table 2 display the results of FreeAL at different rounds in the collaborative training progress on the training and test dataset for transductive and inductive performance respectively. Table 3 reports the comparisons of FreeAL with other zero-shot and supervised counterparts.\nBased on these results, it can be observed that FreeAL significantly enhances the unsupervised performance of both LLM and SLM. Free exceeds the zero-shot ICL for the LLM by 3.44%, 3.05%, and 2.60% on the SST-2, MR, and TREC dataset respectively. It reaches a staggering lead of 34.6% on the SUBJ dataset where the LLM fails to adapt to on its own. In the medical diagnosis and biochemical fields, FreeAL also exhibits a notable advantage of 12.9% and 23.1% on the chemical and disease tasks of the BC5CDR dataset. FreeAL showcases a similar trend of leading performance for the SLMs. Interestingly, In comparison to the supervised counterparts, FreeAL achieves competitive performance on par with these supervised rivals on some simple tasks such as SST-2 and SUBJ datasets and greatly narrows the gap between the zero-shot and fully-supervised performances on other challenging tasks. Notably, the performance can be further improved with more interaction rounds (also larger cost), but 4 rounds of interaction can achieve satisfactory results empirically. These results suggest that FreeAL is able to fully distill the task-related knowledge from LLMs' weak supervision. More analyses can be found in Section 5.3.2." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Comparisons with Active Learning", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "We also compare our FreeAL framework with some traditional active learning methods on the SST-2 and MR dataset. As shown in Table 5 andFig " }, { "figure_ref": [], "heading": "Effect of Collaborative Training", "publication_ref": [], "table_ref": [ "tab_1", "tab_2", "tab_7" ], "text": "From a more nuanced perspective of the performance improvements at different rounds on the training set in Table 1, it can be noticed that FreeAL iteratively refines the noisy annotations during the collaborative training. The improvement from round 0 to 1 indicates the effectiveness of selfgenerated demonstrations for better initial annotations. The performance advancements from rounds 1 to 2 and rounds 3 to 4 demonstrate the ability of robust self-training for SLM to distill valuable knowledge from noisy annotations. Further, the performance boost from round 2 to round 3 verifies the efficacy of the active label refinery process. For the test dataset in Table 2, the performance changes follow a similar trend with some fluctuations, which have been further discussed in Appendix A.1.\nTo further validate the efficacy of collaborative training, we also conduct additional ablation experiments for the components of FreeAL as shown in Table 6. For the generalization performance on the SLM, we compare FreeAL with its variant that discards robust self-training and adopts the standard cross entropy loss for training (including round 2 and 4). It can be observed that robust self-training largely improves the performance of FreeAL. 
For the performance of LLM, we ablate FreeAL with other selection strategies from traditional active learning rather than small loss selection, including random selection and entropy selection that selects samples with the lowest entropy values with the same budget as small loss selection. We can see that entropy selection slightly makes up for the poor performance of random selection, but still lags behind FreeAL by a notable margin." }, { "figure_ref": [ "fig_4" ], "heading": "Impact of In-Context Examples m", "publication_ref": [], "table_ref": [], "text": "Then, we show the effect of different numbers m of in-context examples during the process of ICL on the SST-2 and MR datasets. As shown in Figure 3, FreeAL is able to produce a competitive performance to the supervised rivals over a wide range of m from 1 to 20, this further verifies the robustness of FreeAL and we can simply adopt m = 10 for fair comparisons in our experiments." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we overhaul the traditional active learning in the era of LLMs and propose a novel framework called FreeAL that merely relies on the knowledge of LLMs to enhance human-free generalization performance. The key idea of FreeAL is to distill and filter the task-related knowledge from LLMs with a collaborative framework, where the LLM is employed as an active annotator and the SLM is engaged as a weak learner to filter out valuable samples for label refinery. The empirical results indicate that FreeAL can largely improve unsupervised performance and reaches comparable performance with supervised rivals in some tasks. While our FreeAL framework operates autonomously without human supervision, it is flexible and can be easily boosted with additional limited human supervision, which we leave for our future work. We hope that our work can spark heightened interest in developing new active annotation algorithms in the era of LLMs." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Our proposed FreeAL is a collaborative framework that aims to enhance unsupervised performance without human effort. Despite its effectiveness, there is still much potential for improvement. First, the effectiveness of FreeAL largely hinges on the strong ability of LLMs. For some domains that are extremely challenging or eccentric, the commonly adopted GPT-3.5-Turbo nowadays may fail to provide a qualified initial annotation, even with self-generated demonstrations. Our model is anticipated to be suitable for these circumstances with the advancement of more powerful LLMs across diverse domains. Besides, we thoroughly forgo human efforts in our FreeAL framework while in practical scenarios there may exist more or less available human support. It remains underexplored how to effectively combine the supervision from human experts and LLMs to synergize their individual strengths, and we leave it for our future work." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "While our proposed FreeAL serves as an innovative way to enhance generalization performance without human intervention, the predictions and self-generated demonstrations of the adopted LLM API may include bias and unfairness. 
Indeed, if one utilizes FreeAL with such biased annotations, it may unpleasantly yield unfair and biased predictions based on characteristics like race, gender, disabilities, LGBTQ, or political orientation. To alleviate this issue, we recommend that potential users first use bias reduction and correction techniques to remove biased text and predictions so as to improve overall fairness and ethical standard. " }, { "figure_ref": [], "heading": "A Additional Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Discussion on Model Selection", "publication_ref": [], "table_ref": [], "text": "With our collaborative training paradigm, we are able to interactively distill and filter task-related knowledge from LLMs. Empirically, our FreeAL method significantly enhances the zero-shot (distillation) performance of both SLMs and LLMs as discussed in Section 5. One intriguing finding is that, in the majority of evaluation cases, the final SLMs outperform the LLMs. This observation can be attributed to the superior distillation ability of SLMs during the weakly-supervised fine-tuning process. Consequently, we believe that SLMs remain a viable choice for practical deployment due to their impressive fine-tuned performance and low computational requirements. Furthermore, in more general scenarios, we recommend the utilization of a validation set to determine the most suitable model for deployment." }, { "figure_ref": [], "heading": "A.2 FreeAL Can Reduce the Annotation Cost", "publication_ref": [ "b40", "b6" ], "table_ref": [ "tab_9", "tab_4", "tab_10" ], "text": "As FreeAL solely depends on the knowledge of LLM and not on human efforts, it can naturally be leveraged as a low-cost data labeler in real-world scenarios. In Table 7, we evaluate the cost disparity between FreeAL and human annotations. Following previous work (Wang et al., 2021), for human labeling, it costs $0.11 per 50 input tokens with a minimum of $0.11. For FreeAL, the cost per example for m-shot inference is estimated approximately as (#T oken × (m + 1) + 100) × 2 × (2 × 10 -6 ), where #T oken is the average token numbers in Table 4, (2 × 10 -6 ) is the cost for GPT-3.5-Turbo per token, 100 is roughly the tokens for the taskspecific descriptions and the model reply. For each sample, the ICL is performed at most twice as initial annotations and refined annotations. It can be observed that FreeAL can serve as a much cheaper data labeler while achieving passable performance.\nWhen entrusted with a training set that is too large to label the entire dataset, the annotation cost can be further reduced by a simple multiround solution. The core idea is to rely more on the weakly-supervised-learning capability of SLM to distill from a small number of annotated labels.\nSpecifically, for the initial annotation round of LLM, we randomly sample a subset of P % samples (empirically we set P = 10) to be annotated by LLM. After that, for robust self-training, we perform the original training process for the labeled data D labeled and simply extend the consistency regularization L u cr for the noisy set D u to the originally unlabeled data (i.e., D u = D u ∪ D unlabeled ). For the demonstration pool filtering, the construction process of D demo is the same, while for D noisy we randomly sample another subset of P % samples from the unlabeled samples to be annotated by LLM for the next iterations. 
The amount of iteration rounds can be larger than the original FreeAL if available to gradually distill the taskrelated knowledge with limited annotation cost.\nAs shown in the Table 8, such a simple remedy is able to achieve competitive results close to the original FreeAL with merely 10% of the previous cost each round, which proves the feasibility of FreeAL when we cannot afford to label the entire dataset. Notably, the process of randomly sampling the to-be-annotated subset on SLMs can be further improved with other advanced query strategies (e.g., uncertainty-based), which is a classic topic in traditional active learning.\nWe further supplement comparisons with some dataset-generation-based methods, including Ze-roGen (Ye et al., 2022a), ProGen (Ye et al., 2022b) and SunGen (Gao et al., 2023). Our FreeAL is fundamentally different from them in several perspectives. First, these dataset-generation-based methods are tailored for an extreme scenario where training data is completely missing, which is unpractical in reality. Second, these methods typically generate low-quality samples, because they overlook the nuances and semantics present in the original authentic data. As a result, they mostly require generating a huge amount of synthetic data for decent performance. For example, on the SST-2 dataset, these methods generate 200k synthesized samples while authentic training samples are only " }, { "figure_ref": [], "heading": "A.6 Results with More Distillation Methods", "publication_ref": [ "b42", "b16" ], "table_ref": [ "tab_17" ], "text": "We also provide the comparisons with some other robust distillation methods, including GCE (Zhang and Sabuncu, 2018), SL (Wang et al., 2019) and ELR (Liu et al., 2020) in Table 13. We can see that FreeAL largely advances the performances of all these distillation baselines. Overall, FreeAL is designed as a flexible framework and we choose an empirically strong self-training algorithm for distillation to prove the feasibility of human-free active learning. One may design more power distillation algorithms for improved results, which we leave for future work." }, { "figure_ref": [], "heading": "A.7 Effect of Interaction for LLM and SLM", "publication_ref": [], "table_ref": [], "text": "To further demonstrate the importance of interaction between the LLM and the SLM. We provide the inductive performance for FreeAL without interaction. For the LLM, it directly adopts its own predictions on the training dataset at round 1 as the demonstration pool directly for testing. While the SLM employs its own predicted labels as supervision at round 2 directly for testing.\nAs displayed in Table 9, we observe that the SLM itself is hard to distill from its own predic- " }, { "figure_ref": [ "fig_5", "fig_7" ], "heading": "A.8 Additional Visualization Results", "publication_ref": [], "table_ref": [], "text": "We further provide some additional visualization results, including the transductive performance on the training dataset (i.e., the accuracy of pseudo labels) at different rounds in Figure 4 and the visualization of comparisons with traditional active learning methods on the MR dataset in Figure 5. " }, { "figure_ref": [], "heading": "B Additional Implementation Details", "publication_ref": [], "table_ref": [], "text": "(x m ) = σEmb(x i ) + (1 -σ)Emb(x j ) y m = σ ỹi + (1 -σ) ỹj(6)\nwhere Emb(x i ) is the embedding of x i and σ ∼ Beta(ς, ς) and we simply set ς = 4. The mixup loss is denoted as L mix . 
the total loss for selftraining of SLM is aggregated, \nL total = L clean + α(L cr + L mix )(7)" }, { "figure_ref": [], "heading": "B.2 More Implementation Details", "publication_ref": [], "table_ref": [], "text": "In our experiments, for the LLM API, we follow the default official settings of the GPT-3.5-Turbo-0301 version. In the demonstration retrieval of ICL, we adopt the unsupervised embeddings with bertbase-uncased at the initial annotation round and the embeddings of SLM for later rounds. The construction and retrieval of D demo are both performed in a class-wise manner to compose the final demonstrations. For the robust self-training of SLM, we adopt the hyperparameters either from previous works or fixed at a moderate value empirically without careful tuning. We finetune the SLM on the basis of the trainer of Huggingface for 50 epochs. The batch size is fixed at 32 with a maximum sequence length of 128. We adopt the AdamW optimizer with a learning rate selected from {3e -4, 3e -5, 3e -6} and a weight decay of 0.01. For robust self-training, the threshold τ of GMM selection is fixed at 0.7 and the ratio R of demonstration selection is fixed at 20. The loss weight parameter α is linearly ramped up from 0 to 1 to avoid overfitting false labels at the start. For evaluation of performance for LLM, as LLMs sometimes output ambiguous predictions outside the label space, these values are treated as random labels in the label space and repeated multiple times to evaluate the average performance\nStep Prompt Details" }, { "figure_ref": [], "heading": "Demonstration Generation", "publication_ref": [], "table_ref": [], "text": "You are required to produce 100 English examples with labels for the task of text classification on the MR (Movie Review) dataset. These samples will be used as prompt examples for the GPT model. MR dataset is used in sentiment-analysis experiments and this dataset contains moviereview documents labeled with respect to their overall sentiment polarity (positive or negative). The task is to classify a movie review as positive or negative according to their overall sentiment polarity. For example, 100 of the unlabeled samples in MR dataset are as follows: [\"review\": \"enigma is well-made , but it's just too dry and too placid .\"] [\"review\": \"the weakest of the four harry potter books has been transformed into the stronger of the two films by the thinnest of margins .\"] ......" }, { "figure_ref": [], "heading": "Active Annotation", "publication_ref": [], "table_ref": [], "text": "You are a helpful assistant for the task of text classification on the MR (Movie Review) dataset. You reply with brief, to-the-point answers with no elaboration as truthfully as possible. MR (Movie Review) dataset is used in sentiment-analysis experiments and this dataset contains moviereview documents labeled with respect to their overall sentiment polarity (positive or negative). Your task is to a binary classification to classify a movie review as positive or negative according to their overall sentiment polarity. The category is divided into two types: 'positive' and 'negative'. Given a movie review: <QUERY>. How do you feel about the sentiment polarity of the given movie review, is this positive or negative? please answer in a single line with 'positive' or 'negative'. during evaluation. Then in subsequent rounds, the SLMs adopt their own previous predictions to replace these ambiguous annotations of LLMs for robust self-training. 
For token-level tasks, as the selection process is performed on the token level in a different manner, we select those tokens with high confidence and matched predictions to pseudolabels as clean and then filter out those samples whose tokens are all clean to constitute the clean subset. The consistency regularization and mixup loss are only suitable for the sequence-level tasks and are disabled in the token-level NER tasks." }, { "figure_ref": [], "heading": "B.3 Prompt Design", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "We provide our prompt design on the MR dataset for the initial demonstration generation step and active annotation step in Table 14. Notably, we adopt the GPT-3.5-Turbo as our LLM so the prompts are also in the chat style with instructions." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work is majorly supported by the NSFC under Grants (No. 62206247), and in part by the National Key Research and Development Program of China (No. 2022YFB3304101). Junbo Zhao also thanks the sponsorship by the Fundamental Research Funds for the Central Universities (No. 226-2022-00028). This paper is also supported by Netease Youling Crowdsourcing Platform 1 . As the importance of data continues rising, Netease Youling Crowdsourcing Platform is dedicated to utilizing various advanced algorithms to provide high-quality, low-noise labeled samples." }, { "figure_ref": [], "heading": "A.3 Impact of SLM's Size", "publication_ref": [], "table_ref": [], "text": "We also conduct experiments to reveal the impact of the size of SLM. As depicted in Table 10, when the size of SLM grows larger from RoBERTa-Base to RoBERTa-Large, FreeAL displays superior performance. This observation indicates that our FreeAL is compatible with different sizes of downstream SLM and the performance can be further improved with a larger SLM." }, { "figure_ref": [], "heading": "A.4 Comparisons with Other AL Methods", "publication_ref": [ "b49", "b0", "b50" ], "table_ref": [], "text": "Here we provide comparisons with some other active learning selection strategies, including Prob-Cover (Yehuda et al., 2022), BADGE (Ash et al., 2020), Region Entropy and Region CAL (Yu et al., 2022) in the Table 11. It can be observed that FreeAL exceeds all its rivals, which consistently demonstrates the superior performance of FreeAL.\nA.5 Comparisons with Dataset-generation-based Methods" } ]
Collecting high-quality labeled data for model training is notoriously time-consuming and labor-intensive for various NLP tasks. While numerous solutions, such as active learning for small language models (SLMs) and prevalent in-context learning in the era of large language models (LLMs), have been proposed to alleviate the labeling burden to some extent, their performance still depends on human intervention. How to reduce the annotation cost in the era of LLMs remains underexplored. To bridge this gap, we revolutionize traditional active learning and propose an innovative collaborative learning framework, FreeAL, to interactively distill and filter task-specific knowledge from LLMs. During collaborative training, an LLM serves as an active annotator that imparts its coarse-grained knowledge, while a downstream SLM acts as a student that filters out high-quality in-context samples and feeds them back to the LLM for subsequent label refinery. Extensive experiments on eight benchmark datasets demonstrate that FreeAL largely enhances the zero-shot performance of both the SLM and the LLM without any human supervision.
FreeAL: Towards Human-Free Active Learning in the Era of Large Language Models
[ { "figure_caption": "Figure 1 :1Figure 1: Comparisons of FreeAL with traditional active learning (AL) algorithms and supervised fine-tuning on the SST-2 dataset. FreeAL surpasses all the active learning rivals and achieves near-supervised performance without human annotation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overview of FreeAL. In each collaborative training loop, the LLM serves as an active annotator imbuing its knowledge. Then the SLM is employed as a filter to distill the task-related knowledge with robust self-training from LLM and filter out a high-quality demonstration pool D demo to feedback the subsequent label refinery of LLM.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "x gen , ỹgen )} as Eq.(2); 7:D demo = D gen = {(x gen , ỹgen )};", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Ablation study of different numbers of incontext examples m on the SST-2 and MR dataset.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Performance of FreeAL on the training set at different rounds during collaborative training.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Table 14 :14An example of prompt design on the MR dataset for the step of demonstration generation and active annotations. The in-context examples are omitted for the active annotation process here.", "figure_data": "", "figure_id": "fig_6", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Comparisons of FreeAL with traditional active learning (AL) algorithms and supervised fine-tuning on the MR dataset. FreeAL surpasses all the active learning rivals and achieves near-supervised performance without human annotation.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "dataset Comparisons of transductive performance on training datasets of different tasks. BC5-C/D refers to BC5CDR-Chemical/Disease dataset. For the token-level NER tasks (including CoNLL03, BC5-C, BC5-D) the F1-score is given and For the other sequence-level tasks the test accuracy is provided.", "figure_data": "ModelRound Demons/AnnosSST-2 MR SUBJ TREC CoNLL03 MA BC5-C BC5-D0Zero-shot88.93 89.99 57.11 43.3664.1959.51 69.2827.74GPT-3.5-Turbo1Self-generated92.16 91.74 86.54 70.7470.8959.78 81.0547.123Selected by Round 294.93 92.89 90.33 77.7074.7161.38 82.4052.59RoBERTa2 4Annotated by Round 1 94.70 92.43 92.24 76.75 Annotated by Round 3 95.49 92.64 92.85 81.5974.49 78.7961.41 81.61 62.15 82.8152.89 59.25ModelRound Demons/AnnosSST-2 MR SUBJ TREC CoNLL03 MA BC5-C BC5-D0Zero-shot92.47 90.05 55.65 77.2066.4759.71 67.8529.60GPT-3.5-Turbo1Self-generated93.73 90.85 83.85 80.0070.2259.97 76.9050.683Selected by Round 295.91 93.10 90.27 79.8070.8060.93 80.7752.70RoBERTa2 4Annotated by Round 1 94.29 89.35 92.95 86.80 Annotated by Round 3 94.66 90.20 94.45 91.4071.82 76.1261.91 80.55 62.64 81.1353.38 58.90", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparisons of inductive performance on test datasets of different tasks. BC5-C/D refers to BC5CDR-Chemical/Disease dataset. 
For the token-level NER tasks (including CoNLL03, BC5-C, and BC5-D) the F1-score is given and For the other sequence-level tasks the test accuracy is provided.", "figure_data": "Performance Evaluation. In this work, we eval-uate FreeAL from two aspects: (i)-TransductivePerformance: Given unsupervised training data,we evaluate the training accuracy of FreeAL whichreflects how well task-specific knowledge is dis-tilled; (ii)-Inductive Generalization: utilize thederived models, including the SLM and D demo forLLM, to further assess the generalization efficiencyon the unseen test set with the inductive learningparadigm. We report the classification accuracy orthe F1 score on both training and testing sets. Wetest the performance at different rounds. Round 0denotes vanilla zero-shot learning of LLM. Round1 and round 2 denote the performance of LLM andSLM in the first training loop, while round 3 and4 are those of the second refinery training loop, asshown in Figure", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance comparison of FreeAL with the zero-shot and supervised counterparts on the test dataset. BC5-C/D refers to BC5CDR-Chemical/Disease dataset. The results of FreeAL are in bold. Supervised FT refers to supervised fine-tuning. The absolute gain indicates the improvement of FreeAL compared to the zero-shot baseline.", "figure_data": "ModelAblationHuman SST-2 MR SUBJ TREC CoNLL MA BC5-C BC5-DZero-shot ICL✗92.47 90.05 55.65 77.20 66.47 59.71 67.85 29.60FreeAL (ours)✗95.91 93.10 90.27 79.80 70.80 60.93 80.77 52.70GPT-3.5-Turbo∆ Absolute gain-+3.44 +3.05 +34.6 +2.60 +4.33 +1.22 +12.9 +23.1Supervised ICL (Standard)✓96.06 92.85 89.30 81.50 85.46 61.22 82.24 68.63Supervised ICL (k-medoids)✓96.10 93.19 90.35 82.60 84.97 61.13 82.06 67.93Zero-shot distillation✗92.81 88.60 59.25 82.80 69.71 61.22 77.05 31.98RoBERTaFreeAL (ours) ∆ Absolute gain✗ -94.66 90.20 94.45 91.40 76.12 62.64 81.13 58.90 +1.85 +1.60 +35.2 +8.60 +6.41 +1.42 +4.08 +26.9Supervised FT✓94.89 91.05 95.95 96.70 88.11 63.96 87.26 75.38DatasetDomain#Token #Train #TestSST-2Sentiment cls19.36,920 1,821MRSentiment cls21.68,662 2,000SUBJSubjectivity cls24.58,000 2,000TRECTopic cls10.25,452500CoNLL03NER14.59 14,041 3,453MAMedical cls205.3 11,550 2,888BC5CDRNER25.924,560 4,797", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "ure 1, It can be observed that FreeAL outstrips the traditional active learning baselines with 20% and 50% acquired human annotations, which further indicates that FreeAL can serve as a superior alternative to traditional active learning by leveraging the rich knowledge of LLMs as a low-cost human-free supervision source.", "figure_data": "MethodHuman AnnoSST-2MRRandom20% samples 50% samples92.42 92.9288.10 89.10Entropy20% samples 50% samples92.37 94.2988.65 90.00CAL20% samples 50% samples93.36 94.5688.45 89.75FreeAL (ours)✗94.6690.20", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparisons of FreeAL with traditional active learning algorithms on the SST-2 and MR dataset.", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation study of FreeAL for the SLM and LLM on the SST-2 dataset and MR dataset.", "figure_data": "ModelAblationSST-2 MRRoBERTaFreeAL w/o robust self-training 89.18 88.95 94.66 90.20FreeAL95.91 
93.10GPT-3.5-Turbowith random selection95.12 92.15with entropy selection95.65 92.67", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "learning requires rethinking generalization. In 5th International Conference on LearningRepresentations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. ", "figure_data": "Zhilu Zhang and Mert R. Sabuncu. 2018. Generalizedcross entropy loss for training deep neural networkswith noisy labels. In Advances in Neural InformationProcessing Systems 31: Annual Conference on Neu-ral Information Processing Systems 2018, NeurIPS2018, December 3-8, Montréal, Canada, pages8792-8802.Yuekai Zhao, Haoran Zhang, Shuchang Zhou, and Zhi-hua Zhang. 2020. Active learning approaches toenhancing neural machine translation. In Findingsof the Association for Computational Linguistics:EMNLP 2020, pages 1796-1806, Online. Associationfor Computational Linguistics.", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparisons of annotation cost($) per example between human labeling and FreeAL.", "figure_data": "SourceSST-2MRSUBJ TRECMAHuman0.110.110.110.110.55FreeAL 1.2e -3 1.3e -3 1.5e -3 8.5e -4 4.5e -3ModelRound AnnotationsSST-2 MR-Vanilla FreeAL94.66 90.20RoBERTa1 2Initial 10% from LLM Another 10% from LLM 93.69 87.75 87.97 81.203Another 10% from LLM 93.76 88.95", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Results with multi-round annotation strategies.", "figure_data": "", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Comparisons of FreeAL with some other active learning algorithms on the SST-2 and MR dataset.6.9k. Empirically, our FreeAL still outperforms these dataset-generation-based methods by a notable margin, as shown in Table12.", "figure_data": "", "figure_id": "tab_12", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Comparisons of FreeAL with datasetgeneration-based methods. We adopt DistilBERT as the SLM of FreeAL for fair comparisons.", "figure_data": "tions due to the inevitable confirmation bias, e.g.,improves 0.2% compared with FreeAL's improve-ment of 0.85% on the MR dataset and even de-grades on the SST-2 dataset. For the LLM, it canself-improve itself, but still underperforms our col-laborative mechanism. Notably, LLM has an in-escapable upper bound on the performance, accord-ing to our empirical findings where SLM outper-forms LLM on 6 out of a total of 8 datasets. Suchresults indicate that the interaction between LLMand SLM can bring new opportunities to convergeto a consensus result between them.", "figure_id": "tab_14", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Comparisons with more distillation methods.", "figure_data": "", "figure_id": "tab_17", "figure_label": "13", "figure_type": "table" } ]
Ruixuan Xiao; Yiwen Dong; Junbo Zhao; Runze Wu; Minmin Lin; Gang Chen; Haobo Wang
[ { "authors": "Jordan T Ash; Chicheng Zhang; Akshay Krishnamurthy; John Langford; Alekh Agarwal", "journal": "", "ref_id": "b0", "title": "Deep batch active learning by diverse, uncertain gradient lower bounds", "year": "2020-04-26" }, { "authors": "Yejin Bang; Samuel Cahyawijaya; Nayeon Lee; Wenliang Dai; Dan Su; Bryan Wilie; Holy Lovenia; Ziwei Ji; Tiezheng Yu; Willy Chung; Quyet V Do; Yan Xu; Pascale Fung", "journal": "", "ref_id": "b1", "title": "A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity", "year": "2023" }, { "authors": "David Berthelot; Nicholas Carlini; Ian J Goodfellow; Nicolas Papernot; Avital Oliver; Colin Raffel", "journal": "", "ref_id": "b2", "title": "Mixmatch: A holistic approach to semisupervised learning", "year": "2019-12-08" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020-12-06" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; Slav Dean; Noah Petrov; Fiedel", "journal": "", "ref_id": "b4", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Liat Ein-Dor; Alon Halfon; Ariel Gera; Eyal Shnarch; Lena Dankin; Leshem Choshen; Marina Danilevsky; Ranit Aharonov; Yoav Katz; Noam Slonim", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Active Learning for BERT: An Empirical Study", "year": "2020" }, { "authors": "Jiahui Gao; Renjie Pi; Yong Lin; Hang Xu; Jiacheng Ye; Zhiyong Wu; Weizhong Zhang; Xiaodan Liang; Zhenguo Li; Lingpeng Kong", "journal": "", "ref_id": "b6", "title": "Self-guided noise-free data generation for efficient zero-shot learning", "year": "2023-05-01" }, { "authors": "Tianyu Gao; Adam Fisch; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Making pre-trained language models better few-shot learners", "year": "2021" }, { "authors": "Suchin Gururangan; Ana Marasović; Swabha Swayamdipta; Kyle Lo; Iz Beltagy; Doug Downey; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": 
"Don't stop pretraining: Adapt language models to domains and tasks", "year": "2020" }, { "authors": "Bo Han; Quanming Yao; Xingrui Yu; Gang Niu; Miao Xu; Weihua Hu; Ivor W Tsang; Masashi Sugiyama", "journal": "", "ref_id": "b9", "title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels", "year": "2018-12-03" }, { "authors": "Geoffrey E Hinton; Oriol Vinyals; Jeffrey Dean", "journal": "", "ref_id": "b10", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Alex Holub; Pietro Perona; Michael C Burl", "journal": "IEEE Computer Society", "ref_id": "b11", "title": "Entropy-based active learning for object recognition", "year": "2008-06-28" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b12", "title": "Large language models are zero-shot reasoners", "year": "2022" }, { "authors": "Jiao Li; Yueping Sun; Robin J Johnson; Daniela Sciaky; Chih-Hsuan Wei; Robert Leaman; Allan Peter Davis; Carolyn J Mattingly; Thomas C Wiegers; Zhiyong Lu", "journal": "Database J. Biol. Databases Curation", "ref_id": "b13", "title": "Biocreative V CDR task corpus: a resource for chemical disease relation extraction", "year": "2016" }, { "authors": "Junnan Li; Richard Socher; Steven C H Hoi", "journal": "", "ref_id": "b14", "title": "Dividemix: Learning with noisy labels as semi-supervised learning", "year": "2020-04-26" }, { "authors": "Jiachang Liu; Dinghan Shen; Yizhe Zhang; Bill Dolan; Lawrence Carin; Weizhu Chen", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "What makes good in-context examples for GPT-3?", "year": "2022" }, { "authors": "Sheng Liu; Jonathan Niles-Weed; Narges Razavian; Carlos Fernandez-Granda", "journal": "", "ref_id": "b16", "title": "Early-learning regularization prevents memorization of noisy labels", "year": "2020-12-06" }, { "authors": "Yao Lu; Max Bartolo; Alastair Moore; Sebastian Riedel; Pontus Stenetorp", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity", "year": "2022" }, { "authors": "Katerina Margatina; Giorgos Vernikos; Loïc Barrault; Nikolaos Aletras", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Active learning by acquiring contrastive examples", "year": "2021" }, { "authors": "Yu Meng; Jiaxin Huang; Yu Zhang; Jiawei Han", "journal": "", "ref_id": "b19", "title": "Generating training data with language models: Towards zero-shot language understanding", "year": "2022" }, { "authors": "Sewon Min; Xinxi Lyu; Ari Holtzman; Mikel Artetxe; Mike Lewis; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics. 
OpenAI", "ref_id": "b20", "title": "Rethinking the role of demonstrations: What makes in-context learning work?", "year": "2022" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke Miller; Maddie Simens; Amanda Askell; Peter Welinder; Paul F Christiano; Jan Leike; Ryan Lowe", "journal": "", "ref_id": "b21", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Bo Pang; Lillian Lee", "journal": "", "ref_id": "b22", "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "year": "2004" }, { "authors": "Bo Pang; Lillian Lee", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", "year": "2005" }, { "authors": "Ameya Prabhu; Charles Dognin; Maneesh Singh", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Sampling bias in deep active classification: An empirical study", "year": "2019" }, { "authors": "Dongyu Ru; Jiangtao Feng; Lin Qiu; Hao Zhou; Mingxuan Wang; Weinan Zhang; Yong Yu; Lei Li", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Active sentence learning by adversarial uncertainty sampling in discrete space", "year": "2020" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen H Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; V Nihal; Debajyoti Nayak; Jonathan Datta; Mike Chang; Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Févry; Alan Fries; Ryan Teehan; Le Teven; Stella Scao; Leo Biderman; Thomas Gao; Alexander M Wolf; Rush", "journal": "", "ref_id": "b26", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2022-04-25" }, { "authors": "Timo Schick; Hinrich Schütze", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Exploiting cloze-questions for few-shot text classification and natural language inference", "year": "2021" }, { "authors": "Tim Schopf; Daniel Braun; Florian Matthes", "journal": "", "ref_id": "b28", "title": "Evaluating unsupervised text classification: Zero-shot and similarity-based approaches", "year": "2022" }, { "authors": "Ozan Sener; Silvio Savarese", "journal": "", "ref_id": "b29", "title": "Active learning for convolutional neural networks: A core-set approach", "year": "2018-04-30" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Improving neural machine translation models with monolingual data", "year": "2016" }, { "authors": "Burr Settles; Mark Craven", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "An analysis of active learning strategies for sequence labeling tasks", "year": "2008" }, { "authors": "Artem Shelmanov; Dmitri Puzyrev; Lyubov Kupriyanova; Denis Belyakov; Daniil Larionov; Nikita Khromov; Olga Kozlova; Ekaterina Artemova; V Dmitry; Alexander Dylov; Panchenko", 
"journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Active learning for sequence tagging with deep pre-trained models and Bayesian uncertainty estimates", "year": "2021" }, { "authors": "Ryan Smith; Jason A Fries; Braden Hancock; Stephen H Bach", "journal": "", "ref_id": "b33", "title": "Language models in the loop: Incorporating prompting into weak supervision", "year": "2022" }, { "authors": "Richard Socher; Alex Perelygin; Jean Wu; Jason Chuang; Christopher D Manning; Andrew Ng; Christopher Potts", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "year": "2013" }, { "authors": "Kihyuk Sohn; David Berthelot; Nicholas Carlini; Zizhao Zhang; Han Zhang; Colin Raffel; Ekin Dogus Cubuk; Alexey Kurakin; Chun-Liang Li", "journal": "", "ref_id": "b35", "title": "Fixmatch: Simplifying semi-supervised learning with consistency and confidence", "year": "2020-12-06" }, { "authors": "S U Hongjin; Jungo Kasai; Chen Henry Wu; Weijia Shi; Tianlu Wang; Jiayi Xin; Rui Zhang; Mari Ostendorf; Luke Zettlemoyer; Noah A Smith; Tao Yu", "journal": "", "ref_id": "b36", "title": "Selective annotation makes language models better few-shot learners", "year": "2023" }, { "authors": "Romal Thoppilan; Daniel De Freitas; Jamie Hall; Noam Shazeer; Apoorv Kulshreshtha; Heng-Tze; Alicia Cheng; Taylor Jin; Leslie Bos; Yu Baker; Yaguang Du; Hongrae Li; Huaixiu Lee; Amin Steven Zheng; Marcelo Ghafouri; Yanping Menegali; Maxim Huang; Dmitry Krikun; James Lepikhin; Dehao Qin; Yuanzhong Chen; Zhifeng Xu; Adam Chen; Maarten Roberts; Yanqi Bosma; Chung-Ching Zhou; Igor Chang; Will Krivokon; Marc Rusch; Kathleen S Pickett; Meredith Ringel Meier-Hellstern; Tulsee Morris; Renelito Delos Doshi; Toju Santos; Johnny Duke; Ben Soraker; Vinodkumar Zevenbergen; Mark Prabhakaran; Ben Diaz; Kristen Hutchinson; Alejandra Olson; Erin Molina; Josh Hoffman-John; Lora Lee; Ravi Aroyo; Alena Rajakumar; Matthew Butryna; Viktoriya Lamm; Joe Kuzmina; Aaron Fenton; Rachel Cohen; Ray Bernstein; Blaise Kurzweil; Claire Agüera Y Arcas; Marian Cui; Ed H Croak; Quoc Chi; Le", "journal": "", "ref_id": "b37", "title": "Lamda: Language models for dialog applications", "year": "2022" }, { "authors": "Erik F Tjong; Kim Sang; Fien De; Meulder ", "journal": "", "ref_id": "b38", "title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", "year": "2003" }, { "authors": "Ellen M Voorhees; Dawn M Tice", "journal": "ACM", "ref_id": "b39", "title": "Building a question answering test collection", "year": "2000-07-24" }, { "authors": "Shuohang Wang; Yang Liu; Yichong Xu; Chenguang Zhu; Michael Zeng", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Want to reduce labeling cost? 
GPT-3 can help", "year": "2021" }, { "authors": "Xudong Wang; Long Lian; Stella X Yu", "journal": "Springer", "ref_id": "b41", "title": "Unsupervised selective labeling for more effective semisupervised learning", "year": "2022-10-23" }, { "authors": "Yisen Wang; Xingjun Ma; Zaiyi Chen; Yuan Luo; Jinfeng Yi; James Bailey", "journal": "IEEE", "ref_id": "b42", "title": "Symmetric cross entropy for robust learning with noisy labels", "year": "2019-10-27" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b43", "title": "Finetuned language models are zero-shot learners", "year": "2022-04-25" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Hanwei Xu; Yujun Chen; Yulun Du; Nan Shao; Wang Yanggang; Haiyu Li; Zhilin Yang", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "Zero-Prompt: Scaling prompt-based pretraining to 1,000 tasks improves zero-shot generalization", "year": "2022" }, { "authors": "Jiacheng Ye; Jiahui Gao; Qintong Li; Hang Xu; Jiangtao Feng; Zhiyong Wu; Tao Yu; Lingpeng Kong", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "ZeroGen: Efficient zero-shot learning via dataset generation", "year": "2022" }, { "authors": "Jiacheng Ye; Jiahui Gao; Zhiyong Wu; Jiangtao Feng; Tao Yu; Lingpeng Kong", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "ProGen: Progressive zero-shot dataset generation via in-context feedback", "year": "2022" }, { "authors": "Qinyuan Ye; Bill Yuchen Lin; Xiang Ren", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "CrossFit: A few-shot learning challenge for crosstask generalization in NLP", "year": "2021" }, { "authors": "Ofer Yehuda; Avihu Dekel; Guy Hacohen; Daphna Weinshall", "journal": "", "ref_id": "b49", "title": "Active learning through a covering lens", "year": "2022" }, { "authors": "Yue Yu; Lingkai Kong; Jieyu Zhang; Rongzhi Zhang; Chao Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "AcTune: Uncertainty-based active self-training for active fine-tuning of pretrained language models", "year": "2022" }, { "authors": "Michelle Yuan; Hsuan-Tien Lin; Jordan Boyd-Graber", "journal": "Association for Computational Linguistics", "ref_id": "b51", "title": "Cold-start active learning through selfsupervised language modeling", "year": "2020" }, { "authors": "Chiyuan Zhang; Samy Bengio; Moritz Hardt; Benjamin Recht; Oriol Vinyals", "journal": "", "ref_id": "b52", "title": "Understanding deep", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 322.52, 379.3, 202.62, 31.89 ], "formula_id": "formula_0", "formula_text": "arg max P y∈Y (V (y) | T (x demo 1 , ỹdemo 1 ), ..., T (x demo m , ỹdemo m ), T (x test ))(1)" }, { "formula_coordinates": [ 4, 105.1, 569.21, 184.77, 13.05 ], "formula_id": "formula_1", "formula_text": "{(x gen , ỹgen )} ← P (ρ gen , T (c gen ))(2)" }, { "formula_coordinates": [ 4, 307.77, 363, 168.86, 26.24 ], "formula_id": "formula_2", "formula_text": "D noisy = D train \\ (∪ j∈Y D j clean ) 20:" }, { "formula_coordinates": [ 5, 97.17, 356.89, 192.7, 27.36 ], "formula_id": "formula_3", "formula_text": "D l = {(x i , ỹi ) | x i ∈ D train , w i ≥ τ } , D u = {(x i ) | x i ∈ D train , w i < τ }(3)" }, { "formula_coordinates": [ 5, 98.83, 523.44, 191.04, 64.63 ], "formula_id": "formula_4", "formula_text": "L l cr = 1 |D l | x i ∈D l l ce (x aug i , ỹi ), L u cr = 1 |D u | x i ∈Du l kl (S(x aug i ), S(x i )) (4)" }, { "formula_coordinates": [ 5, 114.11, 646.83, 175.75, 14.37 ], "formula_id": "formula_5", "formula_text": "L total = L clean + α(L l cr + L u cr ) (5)" }, { "formula_coordinates": [ 5, 306.14, 342.47, 220.18, 15.86 ], "formula_id": "formula_6", "formula_text": "D j clean = {(x i , ỹi ) | rank(l i ) ≤ R%, ỹi = j}." }, { "formula_coordinates": [ 15, 97.77, 656.32, 192.1, 28.6 ], "formula_id": "formula_7", "formula_text": "(x m ) = σEmb(x i ) + (1 -σ)Emb(x j ) y m = σ ỹi + (1 -σ) ỹj(6)" }, { "formula_coordinates": [ 15, 111.01, 754.83, 178.86, 10.82 ], "formula_id": "formula_8", "formula_text": "L total = L clean + α(L cr + L mix )(7)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "The challenge focuses on evaluating end-to-end perception tasks on detection, tracking, and multi-agent forecasting on Argoverse 2 sensor dataset. The dataset provides track annotations for 26 object categories. For testing, our algorithm needs to be able to detect objects in the current frame and forecast trajectories for the next 3 seconds. The end-to-end task is different from the motion forecasting task since the tracking ground truths are not provided." }, { "figure_ref": [ "fig_0" ], "heading": "Method", "publication_ref": [ "b1" ], "table_ref": [], "text": "Motivated by UniAD [2], we propose an end-to-end framework for detection, tracking, and forecasting. We fuse the BEV features from LiDAR and multi-view cameras as a unified representation for all three downstream tasks. HD * Work done as an intern at Lenovo Research. map is encoded as vectors to help with motion forecasting. The system overview is shown in figure 1." }, { "figure_ref": [], "heading": "BEV Feature", "publication_ref": [ "b8", "b2", "b3", "b2" ], "table_ref": [], "text": "For LiDAR point cloud, we employ a LIDAR BEV encoder based on SECOND [9] to generate LIDAR BEV features B l . For multi-view images, we adopt a spatiotemporal transformer based on BEVFormer [3] to generate BEV features from multi-view cameras B c . The camera BEV branch has two modules: the backbone network and the BEV encoder.\nThe BEV features from LiDAR B l and multi-view cameras B c are fused into one BEV feature by a spatial encoder following BEVFusion [4]. The spatial encoder concatenates B l and B c and then reduces the features dimensions through a convolution layer. After spatial fusion, historical BEV features are fused with the current frame by the spatial-temporal transformer in BEVformer [3]. The spatial-temporal fused BEV feature is used as a 3D representation and input to downstream heads." }, { "figure_ref": [], "heading": "Detector", "publication_ref": [ "b11" ], "table_ref": [], "text": "The detector is based on Deformable DETR [12]. The temporal fused BEV features are fed into the decoder as object queries. The Deformable DETR head is used to predict 3D bounding boxes and velocity without Non-Maximum Suppression (NMS). 3D box regression is supervised using the L1 loss. The detection queries capture the agent characteristic by attending to the BEV features." }, { "figure_ref": [], "heading": "Tracker", "publication_ref": [ "b9" ], "table_ref": [], "text": "Tracking is initialized by object queries from the detector as the tracking candidate in each frame. While track queries, which are based on MOTR [10], are used to associate track queries in the current frame and the previous " }, { "figure_ref": [], "heading": "VecterMap Encoder", "publication_ref": [ "b0" ], "table_ref": [], "text": "HD maps are typically represented by vectorized spatial coordinates. To encode the information on lanes and pedestrian crossings, we adopt a vectorized encoding method called VectorNet [1], which operates on the vectorized HD maps to avoid lossy rendering and computationally inten-sive ConvNet encoding steps. The map elements are encoded by cross-attention layers and are represented as map queries. We generate the position encoding with the center of each vector. The map queries and the position encoding are forwarded to the Motion Head to help with motion forecasting." 
}, { "figure_ref": [], "heading": "Motion Head", "publication_ref": [], "table_ref": [], "text": "The motion head takes in the agent's information from the Detection and the map information from the Vector " }, { "figure_ref": [], "heading": "Test Time Augmentation and Ensemble", "publication_ref": [ "b6" ], "table_ref": [], "text": "During inference, we apply Test Time Augmentation (TTA) to further improve the performance. In addition, we use NMS to merge the results of augmented input.\nWe use the Weighted Box Fusion (WBF) [7] to ensemble multiple models with different training settings to improve detection and forecasting prediction accuracy. For E2E forecasting, we use a two-step ensemble procedure to ensemble not only the detection bounding boxes, but also future trajectories. In step 1, we cluster the detection bounding boxes according to the intersection-over-union (IoU). In step 2, we cluster the forecasting trajectories with L2 distances and adaptively adjust the threshold based on the speed of instances." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "The competition used the Argoverse 2 Sensor Dataset, which consisted of 1000 scenes (750 for training, 150 for validation and 150 for testing) with a total of 4.2 hours of driving data. The total dataset is extracted in the form of 1 TB of data. Each vehicle log has a duration of approximately 15 seconds and includes an average of approximately 150 LiDAR scans with 10 FPS LiDAR frames. The dataset has 7 surrounding cameras with 20 FPS. For the E2E Forecasting track, 1 keyframe is sampled in 2Hz from the training, validation, and testing sets." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b7", "b4", "b5" ], "table_ref": [], "text": "Detection. Argoverse [8] proposes a new metric Composite Detection Score (CDS) which simultaneously measures precision, recall, object extent, translation error, and orientation. The mean metrics are computed as an average of 26 different object categories.\nTracking. HOTA [5] is the key metric for the challenge, while AMOTA and MOTA are also important metrics for reference. HOTA explicitly balances the effect of performing accurate detection, association, and localization in a single, unified metric. MOTA combines false positives, missed targets, and identifies switches to compute tracking accuracy. AMOTA, similar to MOTA, is averaged over all recall thresholds to consider the confidence of predicted tracks.\nForecasting. The main evaluation metric is Forecasting mAP (mAP F) [6], ADE, and FDE which are averaged over static, and non-linearly moving cohorts. mAP F is the key metric for the challenge, which defines a true positive when there is a positive match in both the current timestamp T and the future (final) in the T + N time slot. ADE is an average L2 distance between the best-forecasted trajectory and the ground truth. FDE is an L2 distance between the endpoint of the best-forecasted trajectory and the ground truth." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b10" ], "table_ref": [], "text": "Architecture details. In the LiDAR branch, the voxel size of the LiDAR encoder is (0.075m, 0.075m, 0.2m) and the point clouds range is limited to [-54m, 54m] x [-54m, 54m] x [-3m, 3m] to adapt the maximum range of E2E forecasting. In LiDAR backbone, we down-sampled voxels to 1/8. 
For the camera branch, we crop and resize camera images to 976x1440 to save GPU memory. we use the ResNet-101 as a backbone and a 4-layer FPN as a neck to extract features from multiview cameras. Training. We apply a 2-step training procedure. First, we train the detector for 6 epochs. Then we train the entire endto-end network to optimize the detector, tracker, and motion head simultaneously for 20 epochs. We freeze the LiDAR and image backbones in step 2 to save GPU memory.\nThe models are trained by AdamW optimizer, with a learning rate of 2e-4, a weight decay of 0.01, and a total batch size of 8 on 8 V100 GPUs. We use cosine annealing to decay the learning rate. We applied CBGS (Class-Balanced Grouping and Sampling) [11] to get the expert model for balanced data distribution. TTA and Ensemble. For every model, we employ global scaling with [0.95, 1, 1.05] and flipping with respect to the xz-plane and yz-plane for TTA. We trained multiple models with three voxel sizes of [0.05m, 0.075m, 0.1m], with or without CBGS augmentation and with or without camera input. Totally we ensemble 8 models to generate final results." }, { "figure_ref": [], "heading": "Final Results", "publication_ref": [], "table_ref": [ "tab_0", "tab_1", "tab_2" ], "text": "We test our solution on 3 sub-challenges of Detection, Tracking, and Forecasting in the E2E Forecasting track of the Argoverse Challenge. Table 1 is the final leaderboard of Forecasting and shows that our solution achieves 46.70 mAP F and ranks 1 st place in Forecasting. Table 2 is the final leaderboard of Tracking and shows that our solution achieves 56.19 HOTA and ranks 1 st place in Tracking. Table 3 is the final leaderboard of 3D Object Detection and shows that our solution achieves 0.34 CDS and ranks 1 st place in Detection." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We devise a unified framework of detection, tracking, and forecasting for Autonomous Driving. Our solution ranks 1 st place in Detection, Tracking, and Forecasting of the E2E Forecasting track in Argoverse Challenges at CVPR 2023 WAD." } ]
This report presents our Le3DE2E solution for unified sensor-based detection, tracking, and forecasting in the Argoverse Challenges at the CVPR 2023 Workshop on Autonomous Driving (WAD). We propose a unified network that incorporates three tasks: detection, tracking, and forecasting. The solution adopts a strong Bird's Eye View (BEV) encoder with spatial and temporal fusion and generates unified representations for multiple tasks. It was tested on the Argoverse 2 sensor dataset [8] to evaluate the detection, tracking, and forecasting of 26 object categories. We achieve 1st place in Detection, Tracking, and Forecasting on the E2E Forecasting track in the Argoverse Challenges at CVPR 2023 WAD.
Technical Report for Argoverse Challenges on Unified Sensor-based Detection, Tracking, and Forecasting
[ { "figure_caption": "Figure 1 .1Figure 1. System overview. First, we extract BEV features from LiDAR point cloud and camera images separately. The LiDAR point clouds of the current frame are voxelized and encoded in the BEV feature map by LiDAR backbone. Image features are extracted from synchronized multi-view cameras by an image backbone and are encoded to a camera BEV feature by a transformer-based BEV encoder. Second, spatial-fusion module fuses LiDAR and Camera BEV into a unified BEV representation. The historical frame BEV feature maps are fused with the current frame using a temporal encoder. Third, the spatial-temporal fused BEV is fed into Detector which generates detection bounding boxes. Tracker utilizes object queries from the detector to associate track queries between frames. Furthermore, the Motor Head forecasts the future trajectories for each agent from Detector. In addition, HD Map is encoded into vectors and interacts with agents to help with motion forecasting.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Forecasting Leaderboard on End-to-End Forecasting Challenge frame. The track queries which are matched with the history frame aggregate temporal information in a self-attention module until the agent disappears in a certain time period.", "figure_data": "TeammAP F(↑) ADE(↓) FDE(↓)dgist-cvlab45.834.094.53Host 4626 Team14.515.107.32Le3DE2E (Ours)46.703.223.76", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Tracking Leaderboard on End-to-End Forecasting Challenge", "figure_data": "TeamHOTA(↑) AMOTA(↑) MOTA(↑)AIDrive (v0)44.3617.4732.61dgist-cvlab41.497.8817.97Host 4626 Team39.987.1016.21Le3DE2E (Ours)56.1919.5339.34TeammCDS(↑) mAP(↑) mATE(↓) mASE(↓) mAOE(↓)BEV (BEVFusion)0.370.460.400.300.50Detectors0.340.420.390.300.50AIDrive (Lv0)0.270.350.450.330.84Match (lt3d)0.210.260.430.330.50Host 75088 Team (CenterPoint)0.140.180.490.340.72zgzxy0010.120.150.450.340.65Le3DE2E (Ours)0.390.480.410.310.47", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "3D Object Detection Leaderboard", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
Zhepeng Wang; Feng Chen; Kanokphan Lertniphonphan; Siwei Chen; Jinyao Bao; Pengfei Zheng; Jinbao Zhang; Kaer Huang; Tao Zhang
[ { "authors": "Jiyang Gao; Chen Sun; Hang Zhao; Yi Shen; Dragomir Anguelov; Congcong Li; Cordelia Schmid", "journal": "", "ref_id": "b0", "title": "Vectornet: Encoding hd maps and agent dynamics from vectorized representation", "year": "2020" }, { "authors": "Yihan Hu; Jiazhi Yang; Li Chen; Keyu Li; Chonghao Sima; Xizhou Zhu; Siqi Chai; Senyao Du; Tianwei Lin; Wenhai Wang", "journal": "", "ref_id": "b1", "title": "Planning-oriented autonomous driving", "year": "2023" }, { "authors": "Zhiqi Li; Wenhai Wang; Hongyang Li; Enze Xie; Chonghao Sima; Tong Lu; Yu Qiao; Jifeng Dai", "journal": "", "ref_id": "b2", "title": "Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers", "year": "2022" }, { "authors": "Zhijian Liu; Haotian Tang; Alexander Amini; Xingyu Yang; Huizi Mao; Daniela Rus; Song Han", "journal": "", "ref_id": "b3", "title": "Bevfusion: Multitask multi-sensor fusion with unified bird's-eye view representation", "year": "2023" }, { "authors": "Jonathon Luiten; Aljosa Osep; Patrick Dendorfer; H S Philip; Andreas Torr; Laura Geiger; B Leal-Taixé; Leibe", "journal": "International Journal of Computer Vision", "ref_id": "b4", "title": "Hota: A higher order metric for evaluating multi-object tracking", "year": "2020" }, { "authors": "Neehar Peri; Jonathon Luiten; Mengtian Li; Aljovsa Ovsep; Laura Leal-Taix'e; Deva Ramanan", "journal": "", "ref_id": "b5", "title": "Forecasting from lidar via future object detection", "year": "2022" }, { "authors": "Roman Solovyev; Weimin Wang; Tatiana Gabruseva", "journal": "Image and Vision Computing", "ref_id": "b6", "title": "Weighted boxes fusion: Ensembling boxes from different object detection models", "year": "2021" }, { "authors": "Benjamin Wilson; William Qi; Tanmay Agarwal; John Lambert; Jagjeet Singh; Siddhesh Khandelwal; Ratnesh Bowen Pan; Andrew Kumar; Jhony Hartnett; Deva Kaesemodel Pontes; Peter Ramanan; James Carr; Hays", "journal": "", "ref_id": "b7", "title": "Argoverse 2: Next generation datasets for self-driving perception and forecasting", "year": "2021" }, { "authors": "Yan Yan; Yuxing Mao; Bo Li", "journal": "Sensors", "ref_id": "b8", "title": "Second: Sparsely embedded convolutional detection", "year": "2018" }, { "authors": "Fangao Zeng; Bin Dong; Yuang Zhang; Tiancai Wang; Xiangyu Zhang; Yichen Wei", "journal": "", "ref_id": "b9", "title": "Motr: End-to-end multipleobject tracking with transformer", "year": "2022" }, { "authors": "Benjin Zhu; Zhengkai Jiang; Xiangxin Zhou; Zeming Li; Gang Yu", "journal": "", "ref_id": "b10", "title": "Class-balanced grouping and sampling for point cloud 3d object detection", "year": "2019" }, { "authors": "Xizhou Zhu; Weijie Su; Lewei Lu; Bin Li; Xiaogang Wang; Jifeng Dai", "journal": "", "ref_id": "b11", "title": "Deformable detr: Deformable transformers for end-to-end object detection", "year": "2021" } ]
[]
2023-11-27
[ { "figure_ref": [ "fig_2" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b3" ], "table_ref": [], "text": "The skin, as the largest organ of our body, assumes the function of protecting our health. However, due to long-term exposure to ultraviolet radiation, melanocytes in the epidermis produce melanin in large quantities. The most serious form of skin cancer is melanoma, which is formed by the accumulation of melanin. According to the World Health Organization [1], about 132,000 people worldwide are affected by skin cancer (melanoma) every year. And according to the American Cancer Society data, the mortality rate of malignant melanoma reaches 7.66% [2]. In particular, if it is diagnosed at a late stage, the survival rate will drop to 25% within 5 years. In addition to this, there exist a number of non-melanoma pigmented skin diseases (blue nevus, nevus of Ota and spitz nevus). Although they do not lead to death, they often develop in the facial area, which can have a major impact on the patient's appearance. In particular, as the quality of life continues to improve, people are placing more emphasis on their appearance.\nTraditional manual diagnostic methods are severely limited by the level of expertise of dermatologists. The accuracy of diagnosis is often unsatisfactory, especially for areas where medical resources are scarce. According to [3], the accuracy of traditional manual diagnosis is between 24% and 77%. And due to the failure of the diagnosis of the disease, it will lead to the malignant melanoma to miss the best time of diagnosis. Currently, computer-assisted diagnosis can greatly improve the diagnosis rate. For remote areas, as long as there is a computer, the level of diagnosis can be greatly improved. Among them, segmentation tasks are an important part of computer-aided diagnosis. However, in the traditional segmentation task, without training on negative samples, disappointing results are often obtained when segmenting regions without lesions. And this will lead to diagnostic errors. However, medical data is scarce. Based on this problem, our work proposes a segmentation task that only requires training in the segmentation task with positive samples, which gives excellent segmentation results and classification results.\nRecently, in natural image segmentation, researchers have proposed a model with higher-order spatial interaction, HorNet [4]. A high-order spatial interaction mechanism is proposed in HorNet, which can extract feature information at a deeper level. The key module gnconv can combine the advantages of convolution and Transformers at the same time, and also has the advantages of efficient, extendable and Translation-equivariant. Compared with ordinary convolution, gnconv is able to combine the spatial location information of lesions with the spatial location information of neighboring regions. Versus Transformers, gnconv extends the traditional self-concerned second-order interaction to an arbitrary order. This greatly improves the performance of the model [4]. However, researchers have not deeply analyzed the differences in extracting features at different orders. One of the main contributions of this paper is to analyze the features extracted by different orders and propose multiple high-order attentional interaction mechanisms.\nIn skin lesion segmentation, lesions usually have more interferences, including skin epidermal shedding, hair interference, low contrast and boundary blurring, etc. 
In this paper, we propose a multiple high-order attention interaction U-shaped model for skin lesion segmentation. Specifically, a squeeze attention mechanism is added to each of the traditional high-order interactions. This is able to introduce the squeeze attention mechanism to higher interactions, allowing feature extraction to receive further attention at a deeper level. In addition, we use multiple high-order hybrid interactions in each layer in the UNet architecture. Specifically, as shown in Fig. 3, we propose a MHAblock, which is capable of simultaneous 1, 2, 3, 4 and 5 order attention interactions. Based on this, we can determine the presence or absence of a lesion simply based on the explainable features of the 5 different orders in order to avoid subsequent wrong diagnosis. And none of the model training process has negative samples for training. Our contribution is as follows:\n• A Multiple High-order Attention Interaction Module (MHAblcok) is proposed for explainable skin lesion segmentation, combining different levels of feature information obtained from multiple different order interactions, and finally voting to select the best feature information for output. • An explainable segmentation method based on MHAblock is proposed and classification results are derived by explainable inference without the need to learn from negative samples. • An explainable inference classification algorithm (EICA) is proposed. EICA is able to determine the presence or absence of a lesion through explainability and does not introduce additional memory. • A high-order attention interaction module (HAblcok) is proposed. HAblock introduces the squeeze attention mechanism to each order of interaction, allowing feature information to be further attended to at a deeper level. • Our multiple high-order attention interaction mechanism is combined with the UNet architecture to propose the Multiple High-order Attention Interaction U-Shape Model (MHA-UNet). In the skip-connection part, MHA-UNet is successfully combined with Spatial Attention Module (SAB) and Channel Attention Module (CAB) for multilevel and multiscale information fusion." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b6", "b14", "b3" ], "table_ref": [], "text": "Skin lesion segmentation is one of the very important studies in medical image segmentation. Among all types of global diseases, skin diseases have a high incidence and occupy a very large portion of global diseases [5]. Many researchers have proposed many excellent algorithms to improve the accuracy and speed of automatic segmentation. Att-UNet [6] is a model based on the UNet architecture. Att-UNet focuses on the lesion region and suppresses the redundant feature information by adding an attention mechanism to the UNet. In [7], Transformers-based UNet model is proposed. It applies both Transformers and convolution for the extracted features. SCR-Net [8] proposes two key modules (refinement module and calibration module) to enhance the recognition of lesion features. MedT [9] proposed a solution to the problem of Transformers-based network architectures that require a large amount of data samples, which does not require pre-training on a large number of data samples. MedT uses gated axial attention as the main module of the encoder and trains the model with a LoGo strategy. 
FAT-Net [10] uses the Transformers encoder in the UNet model architecture uses Transformers encoder for the encoder part. The decoder part uses Feature Adaptation Module for multilevel feature fusion. ATTENTION SWIN U-NET [11] is optimized and improved based on Swin U-Net [12], where the cascade operation incorporates an attention mechanism to suppress unimportant information. In [13], a lightweight UNet model architecture is proposed. Lightweighting is achieved by using the fusion of multi-level and multi-scale feature information, which makes it possible to achieve the same performance without requiring a higher number of channels. C 2 SDG [14] proposes a feature comparison enhancement method. It takes the input image and extracts the lower features and performs style enhancement for contrast training. In [15], an adaptive feature calibration method using an attention mechanism in the skip-connection part is proposed. META-Unet [16] proposes to use a combination of both high and low levels of transform attention. The use of two different levels of transformed attention improves the generalization ability of the model. MSA [17] is a general segmentation model based on the Segment Anything Model (SAM) [18] adapted for use in the medical domain. MSA is adapted by adding the Adapter module to the SAM to fit the segmentation in the medical domain.\nIn the above mentioned medical image segmentation models related to skin lesions, all are constructed with Transformers and convolution as the base module. Convolution cannot incorporate global information and Transformers training requires a large sample size. Even though many researchers currently use them in combination [7,15], the respective shortcomings still cannot be eliminated. Currently, the higher-order spatial interaction mechanism proposed by researchers in natural scenes can solve the shortcomings of both of them well [4]. In this paper, we take a further step in the high-order spatial interaction mechanism. We propose a high-order attention interaction mechanism and combine it with multiple high-order synthesized representations to form a multiple high-order attention interaction model (MSA-UNet). In the next subsection, we will elaborate our proposed multiple high-order attention interaction model." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Overall architecture", "publication_ref": [ "b15" ], "table_ref": [], "text": "The overall architecture of the proposed Multiple High-order Attention Interaction Model (MSA-UNet) can be shown in Fig. 1. MSA-UNet adopts a UNet-like architecture consisting of an encoder, a skip-connection part, and a decoder, respectively. The MSA-UNet employs a 5-layer UNet architecture, which is carried out layer by layer to reduce the image size and increase the number of channels. The number of channels is set to [16,32,64,128,256]. Layer 1 uses a standard convolution (convolution kernel of 3) for feature extraction. Layer 2 uses the same standard convolution and a Dropout operation. From layer 2 to layer 5, a Dropout operation is used to prevent model overfitting. Layers 3 through 5 go through a standard convolution, Multiple High-Order Attention Interaction Block (MHAblock), and a Dropout operation, respectively. The ConvMixer operation is used after layer 5 to improve the generalization of the model. The decoder and encoder are kept symmetric, which helps in effective fusion of features. 
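To make the layer layout above easier to follow, here is a hedged PyTorch sketch of the five-layer encoder with channels [16, 32, 64, 128, 256]; the MHAblock is left as an injected placeholder, and the max-pool down-sampling, dropout rate, and BatchNorm/ReLU are assumptions where the text does not pin them down.

```python
# Hedged sketch of the 5-layer encoder described above (the ConvMixer after layer 5 and
# the symmetric decoder are omitted for brevity).
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                         nn.BatchNorm2d(c_out),
                         nn.ReLU(inplace=True))

class EncoderSketch(nn.Module):
    def __init__(self, in_ch=3, channels=(16, 32, 64, 128, 256), mha_block=None, drop_p=0.1):
        super().__init__()
        c1, c2, c3, c4, c5 = channels
        make_mha = mha_block if mha_block is not None else (lambda c: nn.Identity())
        self.layer1 = conv_block(in_ch, c1)                                  # layer 1: conv only
        self.layer2 = nn.Sequential(conv_block(c1, c2), nn.Dropout2d(drop_p))  # layer 2: conv + dropout
        self.layer345 = nn.ModuleList([
            nn.Sequential(conv_block(cin, cout), make_mha(cout), nn.Dropout2d(drop_p))
            for cin, cout in [(c2, c3), (c3, c4), (c4, c5)]                  # layers 3-5: conv + MHAblock + dropout
        ])
        self.down = nn.MaxPool2d(2)

    def forward(self, x):
        feats = [self.layer1(x)]
        feats.append(self.layer2(self.down(feats[-1])))
        for layer in self.layer345:
            feats.append(layer(self.down(feats[-1])))
        return feats   # multi-scale features for the skip connections and symmetric decoder
```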
The skip-connection part uses a Channel Attention Module (CAB) and a Spatial Attention Module (SAB) for multilevel and multiscale fusion. The use of the channel and spatial attention modules accelerates model fitting.
The key module of the Multiple High-order Attention Interaction model (MHA-UNet) is the Multiple High-order Attention Interaction block (MHAblock), whose core in turn is the High-order Attention Interaction block (HA-block). In the next subsections, we focus on these modules." }, { "figure_ref": [ "fig_1" ], "heading": "High-order Attention Interaction Module", "publication_ref": [ "b3" ], "table_ref": [], "text": "Traditional high-order spatial interactions [4] lack focus on features. Specifically, as the interaction order increases, the lesion features are gradually lost due to the continuous fusion with global information. To address this problem, as shown in Fig. 2, we propose the High-order Attention Interaction block (HA-block). The proposed HA-block employs a structure similar to Transformers. The self-attention layer inside Transformers is only capable of second-order interactions; HA-block replaces the self-attention layer with a high-order attention interaction layer (HA). HA mainly consists of squeeze attention, linear projection layers, and a global-local filter. HA enables the manipulation of long-term, high-order attention and high-order spatial interactions. " }, { "figure_ref": [], "heading": "2-order attention interactions", "publication_ref": [], "table_ref": [], "text": "In order to detail the high-order attention interaction mechanism, we start from the 2-order attention interaction operation. The squeeze attention operation is not used for the 1-order interaction, because squeeze attention is required to act on features that have already been fused with the global-local filter. Assuming $x \in \mathbb{R}^{HW \times C}$ is an input feature, we can express the 2-order attention interaction operation by the following equations:
$X_0^{HW \times C_0},\; Y_0^{HW \times C_0},\; Y_1^{HW \times C_1} = \mathrm{Pro}_{in}(x) \in \mathbb{R}^{HW \times 2C}$   (1)
$\mathrm{Out}_1 = \mathrm{Pro}\left[ X_0 * \mathrm{GLF}(Y_0) \right] \in \mathbb{R}^{HW \times C}$   (2)
$\mathrm{Out}_2 = \mathrm{Pro}_{out}\left[ \mathrm{SA}(\mathrm{Out}_1) * \mathrm{GLF}(Y_1) \right] \in \mathbb{R}^{HW \times C}$   (3)
where $\mathrm{Pro}_{in}$ and $\mathrm{Pro}_{out}$ are the input and output linear projection layers, respectively; after passing through $\mathrm{Pro}_{in}$, the number of channels of the input feature is doubled and the features $X_0$, $Y_0$ and $Y_1$ are formed. $\mathrm{SA}$ is the squeeze attention module and $\mathrm{GLF}$ is the global-local filter. The multiplication in Eq. (2) followed by the projection realizes the first interaction between $X_0$ and $Y_0$; applying squeeze attention to $\mathrm{Out}_1$ and interacting with $Y_1$ in Eq. (3) then gives the 2-order attention interaction.
High-order attention interactions High-order attention interactions can be generalized from the 2-order attention interactions above. Suppose $x \in \mathbb{R}^{HW \times C}$ is an input feature. After passing through the input linear projection layer, whose total number of output channels is $2C$, the input feature yields an $X_0$ and a set of $Y$ features. The number of channels of each order is obtained from $C_k = \frac{C}{2^{\,n-k-1}}, \; 0 \le k \le n-1$. Specifically, the high-order attention interaction operation can be expressed by the following equations:
$X_0^{HW \times C_0},\; Y_0^{HW \times C_0},\; \cdots,\; Y_{n-1}^{HW \times C_{n-1}} = \mathrm{Pro}_{in}(x)$   (4)
$X_{k+1} = \mathrm{Pro}\left[ \mathrm{SA}(X_k) * \mathrm{GLF}(Y_k) \right] / \alpha$   (5)
where $\alpha$ is a stabilization factor that stabilizes the training of the model. After each interaction, $k$ is increased by 1, and each order performs the squeeze attention and GLF operations.
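To make the recursion of Eqs. (4)-(5) concrete, here is a minimal PyTorch sketch. It is an assumption-based illustration rather than the released implementation: the 1x1-convolution projections, the channel splitting and the way `sa`/`glf` (the squeeze attention and global-local filter described in the next subsections) are passed in as shared callables are our choices, and a real model would need one GLF per order since the channel counts differ; the channel schedule, the skipped squeeze attention at the first step and the stabilization factor α follow the text.

```python
# Minimal sketch of the k-order attention interaction recursion of Eqs. (4)-(5)
# (illustrative assumptions, not the authors' code).
import torch
import torch.nn as nn

class HighOrderAttentionInteraction(nn.Module):
    def __init__(self, dim, order=3, alpha=1.0, sa=None, glf=None):
        super().__init__()
        assert dim % 2 ** (order - 1) == 0, "dim must be divisible by 2^(order-1)"
        self.order, self.alpha = order, alpha
        self.dims = [dim // 2 ** (order - k - 1) for k in range(order)]   # C_k = C / 2^(n-k-1)
        self.proj_in = nn.Conv2d(dim, 2 * dim, 1)                         # Pro_in: C -> 2C
        self.projs = nn.ModuleList([nn.Conv2d(self.dims[k], self.dims[k + 1], 1)
                                    for k in range(order - 1)])           # Pro between orders
        self.proj_out = nn.Conv2d(self.dims[-1], dim, 1)                  # Pro_out
        self.sa = sa if sa is not None else nn.Identity()                 # squeeze attention stand-in
        self.glf = glf if glf is not None else nn.Identity()              # global-local filter stand-in

    def forward(self, x):
        fused = self.proj_in(x)                                           # Eq. (4)
        x_k, ys = torch.split(fused, [self.dims[0], sum(self.dims)], dim=1)
        ys = list(torch.split(ys, self.dims, dim=1))
        for k in range(self.order):                                       # Eq. (5) recursion
            gate = x_k if k == 0 else self.sa(x_k)   # squeeze attention skipped at the 1st step
            x_k = gate * self.glf(ys[k]) / self.alpha
            if k < self.order - 1:
                x_k = self.projs[k](x_k)
        return self.proj_out(x_k)
```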
High-order attention interactions of any order can thus be performed through the above operations." }, { "figure_ref": [ "fig_2" ], "heading": "Squeeze Attention", "publication_ref": [ "b18", "b3", "b19" ], "table_ref": [], "text": "In [19], researchers designed a squeeze attention (SA) module for the segmentation task based on its characteristics. In segmentation, convolution extracts features locally around each pixel, while in the global image attention process the focus objects in the feature map can be activated by operations such as global pooling, convolution and upsampling. SA is able to select the focus feature objects by weighting both globally and locally, which makes it extremely suitable for the segmentation task. Therefore, we propose for the first time to introduce the squeeze attention mechanism into high-order interactions, so that feature information is learned more profoundly through squeeze and attention operations at each order. A structural diagram of squeeze attention is shown in Fig. 3. The operation can be expressed by the following equations:
$\hat{O}_{att} = \mathrm{Conv}_{att}\left[ \mathrm{AP}(x);\, \theta_{att}, \Omega_{att} \right]$   (6)
$O_{att} = \mathrm{Up}\left( \hat{O}_{att} \right)$   (7)
$O = O_{att} * X_{res} + O_{att}$   (8)
where $\mathrm{Conv}_{att}$ is the attention convolution operation, parameterized by the attention factor $\theta_{att}$ and the convolutional layer structure $\Omega_{att}$; $\mathrm{AP}$ is the average pooling layer, $\mathrm{Up}$ is the upsampling operation, and $X_{res}$ is the output of the main convolutional channel.
Global-Local Filter The Global-Local Filter (GLF) was proposed in [4]; earlier, the Global Filter (GF) was proposed in [20]. The GF learns frequency-domain features by means of learnable parameters, and the global-local filter was proposed to improve on it. The GLF extracts local features by feeding half of the channels to a standard convolution (kernel of 3). The locally extracted features are then concatenated with the features learned in the global frequency domain. The frequency-domain features are learned mainly using the 2D Discrete Fourier Transform (2D FFT) and the 2D Inverse Discrete Fourier Transform (2D IFFT). Frequency-domain feature learning captures spatial location information at the frequency-domain level, and the inclusion of local feature learning reduces the loss of lesion feature information as the interaction order increases. Global filtering is performed in the frequency domain because filters learned in the frequency domain are more clearly expressed than in the spatial domain [4]. The process of learning in the frequency domain can be expressed as follows:
$\mathcal{F}[f]:\; F(u,v) = \frac{1}{HW} \sum_{x=0}^{H-1} \sum_{y=0}^{W-1} f(x,y)\, e^{-j 2\pi (ux/H + vy/W)}$   (9)
$\mathcal{F}^{-1}[F]:\; f(x,y) = \sum_{u=0}^{H-1} \sum_{v=0}^{W-1} F(u,v)\, e^{j 2\pi (ux/H + vy/W)}$   (10)
$F_{out} = \mathcal{F}^{-1}\left[ V * \mathcal{F}[f(x,y)] \right]$   (11)
where $HW$ denotes the image size $H \times W$, $f(x,y)$ is the input two-dimensional image, $x$ and $y$ denote the spatial-domain image variables, $u$ and $v$ denote the frequency variables in the frequency domain, $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the 2D discrete Fourier transform and its inverse, and $V$ is the learnable global filter in the frequency domain.
The high-order attention interaction mechanism formed by combining the global-local filter with squeeze attention is one of the key ideas of this work. Squeeze attention attends to features that already incorporate global properties after frequency-domain filtering, and the recursive structure carries squeeze attention into the higher-order attention interactions.
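A hedged sketch of such a global-local filter is given below, using torch.fft to apply Eqs. (9)-(11) with a learnable frequency-domain weight V and a local 3x3 branch on the other half of the channels. The fixed feature-map size, the depthwise local convolution, the weight initialization and the 1x1 output projection are assumptions of this sketch, not details taken from the paper.

```python
# Illustrative global-local filter (assumed details; requires the input spatial size to
# match `size`, because the frequency-domain weight V is stored at a fixed resolution).
import torch
import torch.nn as nn

class GlobalLocalFilter(nn.Module):
    def __init__(self, dim, size=(32, 32)):
        super().__init__()
        half = dim // 2                      # half of the channels go to each branch
        H, W = size
        self.local = nn.Conv2d(half, half, 3, padding=1, groups=half)   # local 3x3 branch
        # learnable frequency-domain filter V, stored as (real, imag) pairs for the rfft2 bins
        self.weight = nn.Parameter(torch.randn(half, H, W // 2 + 1, 2) * 0.02)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):                    # x: (B, dim, H, W) with (H, W) == size
        x_local, x_global = torch.chunk(x, 2, dim=1)
        x_local = self.local(x_local)
        H, W = x_global.shape[-2:]
        spec = torch.fft.rfft2(x_global, norm="ortho")                  # Eq. (9)
        spec = spec * torch.view_as_complex(self.weight)                # Eq. (11): V * F[f]
        x_global = torch.fft.irfft2(spec, s=(H, W), norm="ortho")       # Eq. (10)
        return self.proj(torch.cat([x_local, x_global], dim=1))         # concat local + global
```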
From the perspective of frequency-domain filtering, the high-order attention interaction operation can be expressed as:
$\Gamma_o^k = O^k * \mathrm{Cat}\left[ \mathrm{Conv}(F_{out}^k),\, L \right]$   (12)
$X_{k+1} = \mathrm{Pro}(\Gamma_o^k) / \alpha$   (13)
where $\Gamma_o^k$ is the feature of the k-order attention interaction, $\mathrm{Cat}$ is the Concat operation, $\mathrm{Conv}$ is the standard convolution (convolution kernel of 1) operation, and $L$ is the local feature learning in the global-local filter (a standard convolution with a kernel of 3)." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Multiple High-order Attention Interaction", "publication_ref": [ "b12" ], "table_ref": [], "text": "Skin lesion characteristics are diverse and subject to strong interference. Conventional high-order interactions simply use a single interaction order. We found that different orders extract features carrying different levels of information. Fig. 3 shows our proposed Multiple High-order Attention Interaction Block (MHAblock).
We visualized the results of the different attention interaction orders of MHAblock. The 1-order, 2-order, 3-order, 4-order and 5-order interactions extract features with different levels of information: the 1-order extracts information about the location of the upper part of the lesion, the 2-order the boundary features on the lower right side of the lesion, the 3-order the overall outline of the lesion, the 4-order the boundary features on the upper right side, and the 5-order the boundary features on the left side. Although only one skin lesion sample is visualized here, we observed the same behaviour on all samples, so this is not a coincidence. Traditional high-order interaction methods use only a single-order extraction. By analyzing the high-order interaction properties in depth, we propose a multiple high-order interaction approach, and combine it with the proposed high-order attention interaction module to form the multiple high-order attention interaction module. As shown in Fig. 3, we concatenate the interaction results of the 5 orders, so the number of channels becomes 5 times larger; the most suitable feature information is then selected by a Vote-block, and the number of channels returns to the original value. Specifically, the composition of the multiple high-order attention interaction module can be expressed by the following equations:
$X = \mathrm{EA}(x)$   (14)
$\left[ x_1, x_2, x_3, x_4, x_5 \right]^{\top} = \left[ \mathrm{HA}_1(X), \mathrm{HA}_2(X), \mathrm{HA}_3(X), \mathrm{HA}_4(X), \mathrm{HA}_5(X) \right]^{\top}$   (15)
$C_0 = \mathrm{Concat}\left( x_1, x_2, x_3, x_4, x_5 \right)$   (16)
$\mathrm{Out} = x * V_b(C_0) + x$   (17)
where $\mathrm{EA}$ is the Inverted External Attention Block (IEAB) [13], which improves the generalization ability of the model. $\mathrm{HA}_1$, $\mathrm{HA}_2$, $\mathrm{HA}_3$, $\mathrm{HA}_4$ and $\mathrm{HA}_5$ denote the 1-order, 2-order, 3-order, 4-order and 5-order attention interaction operations, respectively. $V_b$ is the Vote-block, which can be expressed by the following equations:
$A = \mathrm{SA}(x)$   (18)
$\mathrm{Out} = \mathrm{Sig}\{ \mathrm{BN}[ \mathrm{Conv}(A) ] \}$   (19)
where SA is the squeeze attention module, Conv is the standard convolution with a convolution kernel of 1, BN is the batch normalization operation, and Sig is the Sigmoid activation function."
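The block defined by Eqs. (14)-(19) can be sketched as follows. This is a rough, assumption-laden illustration rather than the reference code: `make_ha` is a hypothetical factory that builds one HA branch per order (for instance the recursion sketched earlier), and the IEAB and squeeze-attention components are passed in as optional callables.

```python
# Rough sketch of MHAblock, Eqs. (14)-(19): five parallel HA branches of orders 1-5,
# concatenation, and a vote block whose sigmoid output gates the input (assumed details).
import torch
import torch.nn as nn

class VoteBlock(nn.Module):
    def __init__(self, in_ch, out_ch, sa=None):
        super().__init__()
        self.sa = sa if sa is not None else nn.Identity()   # squeeze attention, Eq. (18)
        self.conv = nn.Conv2d(in_ch, out_ch, 1)             # 1x1 conv back to the input channels
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):                                   # Eq. (19)
        return torch.sigmoid(self.bn(self.conv(self.sa(x))))

class MHABlock(nn.Module):
    def __init__(self, dim, make_ha, ea=None):
        super().__init__()
        self.ea = ea if ea is not None else nn.Identity()   # IEAB stand-in, Eq. (14)
        self.branches = nn.ModuleList([make_ha(order) for order in range(1, 6)])
        self.vote = VoteBlock(5 * dim, dim)

    def forward(self, x):
        feats = self.ea(x)
        outs = [branch(feats) for branch in self.branches]  # Eq. (15): HA_1 ... HA_5
        gate = self.vote(torch.cat(outs, dim=1))            # Eqs. (16) and (18)-(19)
        return x * gate + x                                 # Eq. (17)
```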
}, { "figure_ref": [ "fig_3", "fig_3", "fig_3" ], "heading": "Explainable Inference Classification Algorithm (EICA) Design", "publication_ref": [], "table_ref": [], "text": "As can be seen from Fig. 4, the HA-blocks of different attention interaction orders have very distinctive features. For this reason, we design the Explainable Inference Classification Algorithm (EICA) based on this strong property. EICA is capable of determining the presence or absence of a lesion, without any negative samples being involved, through the interpretability of the attention interactions of different orders. EICA consists of strongly explainable mathematical rules, as shown in Fig. 4. EICA requires no training, is fast, has very low memory usage, and attaching it to the segmentation model takes up almost no additional memory.
The specific algorithm design is based on the characteristics of the attention interactions of different orders. The image is divided into four quadrants, and the decision is made according to whether the learned prediction characteristics of the 1-order, 2-order, 4-order and 5-order interactions appear in the expected positions. As shown in Fig. 4, only if all four conditions are satisfied do we determine that a lesion exists (output 1); otherwise the output is that no lesion exists (output 0).
The proposed EICA has been integrated into the segmentation model to achieve end-to-end classification and segmentation outputs under zero-negative-sample training, and the classification task occupies almost no additional memory." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b20", "b21", "b22", "b23", "b14", "b24", "b24" ], "table_ref": [], "text": "We used three public skin lesion datasets (ISIC2017 [21], ISIC2018 [22,23] and PH 2 [24]) and our clinical dataset. In particular, PH 2 and our clinical dataset are used as external validation to explore the generalization ability of the model, due to their small sample size. The weights used for external validation were obtained after training on ISIC2017.
ISIC2017 and ISIC2018 contain 2000 and 2594 images, respectively, with the corresponding segmentation mask labels. Following the settings of [15], for the ISIC2017 dataset we used 1250 images for training, 150 images for validation and 600 images for testing. For the ISIC2018 dataset we used 1815 images for training, 259 for validation and 520 for testing. The preprocessing was done as per [25] and the images were all resized to 256×256 pixels.
The PH 2 dataset has a total of 200 images with segmentation mask labels. The image size is 768×560 pixels. Due to the small number of 200 images, we use them as external validation. The preprocessing follows [25] and the images are all resized to 256×256 pixels.
Our clinical dataset was obtained from Ruijin Hospital, Shanghai Jiaotong University School of Medicine, and contains 39 cases of skin lesions. Specifically, the clinical data were acquired with a PhotoMax Pro Analyzer system using a magnification of 30×. Three experienced dermatologists annotated the ground truth. Due to the small sample size of the clinical data, we used it as an external validation.
Again, all images were resized to 256×256 pixels.
The NormalSkin * dataset and a publicly available Kaggle dataset † (denoted as Kaggle95 below) both consist of normal human skin images. They are mainly used to validate the negative detection rate of the explainable inference classification trained without negative samples. After removing images with overly large viewpoints from both datasets, 200 and 95 normal skin images were obtained, respectively. Again, all images were resized to 256×256 pixels." }, { "figure_ref": [], "heading": "Implementation details", "publication_ref": [ "b12", "b25" ], "table_ref": [], "text": "All experiments are implemented on a single NVIDIA GeForce RTX 4080 Laptop GPU with 12GB of memory. The experiments were realized with Python 3.8 and PyTorch 1.12.0. For the training data, we used data augmentation operations (random rotation, vertical flip and horizontal flip). The BceDice loss [13] was used as the loss function. The number of training epochs was set to 250 and the batch size was 8. AdamW [26] was used as the optimizer. The learning rate started at an initial value of 0.001 and was decayed to a minimum of 0.00001 with a cosine annealing learning rate scheduler." }, { "figure_ref": [], "heading": "Metrics", "publication_ref": [], "table_ref": [], "text": "The Dice Score Coefficient (DSC), sensitivity (SE), specificity (SP) and accuracy (ACC) are the most commonly used evaluation criteria in the field of medical image segmentation. DSC is mainly used for evaluating the similarity between predicted and true values. SE evaluates the percentage of true positives (TP) among true positives (TP) and false negatives (FN). SP evaluates the percentage of true negatives (TN) among true negatives (TN) and false positives (FP). ACC evaluates the proportion of true positives (TP) and true negatives (TN) among all predictions. In addition, the positive detection rate (PDR) and negative detection rate (NDR) in the classification task are consistent with the definitions of sensitivity (SE) and specificity (SP). It should be noted that PDR and NDR are reported as percentages.
$\mathrm{DSC} = \frac{2TP}{2TP + FP + FN}$   (20)
$\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN}$   (21)
$\mathrm{Sensitivity/PDR} = \frac{TP}{TP + FN}$   (22)
$\mathrm{Specificity/NDR} = \frac{TN}{TN + FP}$   (23)
where TP denotes true positive, TN denotes true negative, FP denotes false positive and FN denotes false negative." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Comparison results", "publication_ref": [ "b26", "b5", "b6", "b8", "b9", "b14", "b12", "b10", "b7", "b16", "b3", "b13", "b15", "b26", "b5", "b6", "b8", "b9", "b14", "b12", "b10", "b7", "b16", "b3", "b13", "b15" ], "table_ref": [ "tab_0", "tab_2" ], "text": "In order to fully confirm the validity of our proposed method, we compared our experimental results with 13 medical segmentation models. In addition, we conducted external validation experiments with 8 of the most advanced models available.
Tables 1 and 2 show the experimental results with the 13 medical segmentation models on the ISIC2017 and ISIC2018 datasets. We conclude from the tables that the DSC value of the proposed MHA-UNet is 12.32% and 6.20% higher than the traditional UNet model on the two datasets, respectively. The DSC values are 2.12% and 2.78% higher when compared with MSA, the currently popular medical adaptation of the Segment Anything Model.
In particular, the DSC values are also significantly higher when compared with the traditional single-order high-order interaction model (HorNet). Fig. 5 shows the visualization results, and Fig. 5(A) shows the intermediate outputs of the proposed MHAblock during segmentation. We can clearly see, for each order, what kind of work is carried out during the segmentation process. This provides a powerful and explainable analysis for skin lesion segmentation. We fused multiple high-order attention interactions to obtain the best performance with very clear interpretability, an effect that is difficult to achieve with other current models.
Tables 3 and 4 show the results of experiments with 8 state-of-the-art medical segmentation models externally validated on PH 2 and our clinical dataset. We can conclude from the tables that our model generalizes noticeably better. On the two datasets, the DSC values of our model are 4.03% and 1.36% higher, respectively, when compared to the traditional single-order high-order interaction model (HorNet). In the generalization experiments our model also shows a strongly explainable segmentation process and the most accurate segmentation results.
Table 1: Performance comparison with 13 medical segmentation models on the ISIC2017 dataset.
Methods | DSC | SE | SP | ACC
U-Net [27] | 0.8159 | 0.8172 | 0.9680 | 0.9164
Att U-Net [6] | 0.8082 | 0.7998 | 0.9776 | 0.9145
TransUNet [7] | 0.8123 | 0.8263 | 0.9577 | 0.9207
MedT [9] | 0.8037 | 0.8064 | 0.9546 | 0.9090
FAT-Net [10] | 0.8500 | 0.8392 | 0.9725 | 0.9326
TransNorm [15] | 0.8933 | 0.8532 | 0.9859 | 0.9582
MALUNet [13] | 0.8896 | 0.8824 | 0.9762 | 0.9583
ATTENTION SWIN U-NET [11] | 0.8859 | 0.8492 | 0.9847 | 0.9591
SCR-Net [8] | 0.8898 | 0.8497 | 0.9853 | 0.9588
MSA [17] | 0.8974 | 0.9200 | 0.9824 | 0.9604
HorNet [4] | 0.9063 | 0.9151 | 0.9746 | 0.9630
C 2 SDG [14] | 0.8938 | 0.8859 | 0.9765 | 0.9588
META-Unet [16] | 0.9068 | 0.8801 | 0.9836 | 0.9639
MHA-UNet (Ours) | 0.9165 | 0.8979 | 0.9870 | 0.9680
Table 2: Performance comparison with 13 medical segmentation models on the ISIC2018 dataset.
Methods | DSC | SE | SP | ACC
U-Net [27] | 0.8545 | 0.8800 | 0.9697 | 0.9404
Att U-Net [6] | 0.8566 | 0.8674 | 0.9863 | 0.9376
TransUNet [7] | 0.8499 | 0.8578 | 0.9653 | 0.9452
MedT [9] | 0.8389 | 0.8252 | 0.9637 | 0.9358
FAT-Net [10] | 0.8903 | 0.9100 | 0.9699 | 0.9578
TransNorm [15] | 0.8951 | 0.8750 | 0.9790 | 0.9580
MALUNet [13] | 0.8931 | 0.8890 | 0.9725 | 0.9548
ATTENTION SWIN U-NET [11] | 0.8540 | 0.8057 | 0.9826 | 0.9480
SCR-Net [8] | 0.8886 | 0.8892 | 0.9714 | 0.9547
MSA [17] | 0.8829 | 0.9199 | 0.9745 | 0.9617
HorNet [4] | 0.9020 | 0.9212 | 0.9645 | 0.9596
C 2 SDG [14] | 0.8806 | 0.8970 | 0.9643 | 0.9506
META-Unet [16] | 0.8899 | 0.8909 | 0.9716 | 0.9552
MHA-UNet (Ours) | 0.9074 | 0.9149 | 0.9649 | 0.9680" }, { "figure_ref": [ "fig_5" ], "heading": "Ablation study", "publication_ref": [ "b3", "b14", "b12", "b10", "b7", "b16", "b3", "b13", "b15", "b14", "b12", "b10", "b7", "b16", "b3", "b13", "b15", "b0", "b1", "b2", "b3", "b4" ], "table_ref": [ "tab_3", "tab_3" ], "text": "To further validate the effectiveness of our proposed Multiple High-order Attention Interaction Block (MHAblock), we conducted ablation experiments. The specific setup of the experiments is shown in Table 5. We replace the high-order attention interaction module with a single-order one in the encoder and the decoder of the proposed MHA-UNet model, respectively. For the choice of the single order we use the optimal interaction order proposed by HorNet [4]. As shown in Table 5, setting A indicates replacing the MHAblock of the encoder with a single-order interaction, setting B indicates replacing the MHAblock of the decoder with a single-order interaction, and setting A + B indicates replacing both the encoder and decoder MHAblock with a single order.
The experimental results demonstrate that all evaluation metrics decrease when replacing either the encoder or the decoder with a single-order interaction. In particular, the performance is worst in the case of simultaneous encoder and decoder replacement. This ablation experiment further demonstrates the effectiveness of the proposed Multiple High-order Attention Interaction Block (MHAblock).
In order to further demonstrate that the fusion of different orders can effectively improve the performance of the model, we conducted additional ablation experiments. The specific settings are shown in Fig. 6; we set up 10 different settings. An important conclusion is that the 3-order attention interaction operation has a relatively large impact: in setup 2, adding the 3-order interaction gives a fast performance improvement, whereas in setup 10, removing the 3-order interaction shows a significant decrease in performance. This finding is also explained by the explainable process: as shown in Fig. 3, the 3-order interaction extracts the overall location of the lesion, so it plays a key role. Moreover, in setup 4, the optimal performance was obtained by the setting that fused the [1,2,3,4,5] orders. This ablation experiment, combined with the explainable segmentation, also reflects that different orders have different degrees of influence.
Table 3: External validation performance comparison with 8 state-of-the-art models on the PH 2 dataset.
Methods | DSC | SE | SP | ACC
TransNorm [15] | 0.8952 | 0.9116 | 0.9328 | 0.9261
MALUNet [13] | 0.8865 | 0.8922 | 0.9425 | 0.9263
ATTENTION SWIN U-NET [11] | 0.8850 | 0.8886 | 0.9363 | 0.9213
SCR-Net [8] | 0.8989 | 0.9114 | 0.9446 | 0.9339
MSA [17] | 0.9096 | 0.9623 | 0.9382 | 0.9401
HorNet [4] | 0.8894 | 0.9567 | 0.9073 | 0.9232
C 2 SDG [14] | 0.9030 | 0.9137 | 0.9476 | 0.9367
META-Unet [16] | 0.8998 | 0.9020 | 0.9510 | 0.9352
MHA-UNet (Ours) | 0.9253 | 0.9539 | 0.9486 | 0.9503
Table 4: External validation performance comparison with 8 state-of-the-art models on our clinical dataset.
Methods | DSC | SE | SP | ACC
TransNorm [15] | 0.8436 | 0.8015 | 0.9752 | 0.9406
MALUNet [13] | 0.8394 | 0.8605 | 0.9507 | 0.9321
ATTENTION SWIN U-NET [11] | 0.8258 | 0.8160 | 0.9573 | 0.9291
SCR-Net [8] | 0.8446 | 0.8122 | 0.9678 | 0.9314
MSA [17] | 0.8543 | 0.9162 | 0.9520 | 0.9436
HorNet [4] | 0.8660 | 0.9626 | 0.9324 | 0.9386
C 2 SDG [14] | 0.8542 | 0.8460 | 0.9650 | 0.9405
META-Unet [16] | 0.8569 | 0.8498 | 0.9643 | 0.9325
MHA-UNet (Ours) | 0.8778 | 0.8874 | 0.9650 | 0.9490" }, { "figure_ref": [ "fig_6", "fig_7", "fig_7" ], "heading": "Discussion and explainable analysis", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "In contrast to natural images, skin lesions usually contain more disturbances, including skin epidermal shedding, hair interference, low contrast, and blurred boundaries, among others. The traditional high-order interaction mechanism has difficulty learning the lesion features well, due to its lack of attention to the focus location. In this paper, we propose a high-order attention interaction mechanism: we introduce squeeze attention into the high-order interaction mechanism and let it attend to features at multiple high levels. Moreover, to the best of our knowledge, we are the first to explain the mechanism of the different interaction orders through explainable skin lesion segmentation. The proposed Multiple High-order Attention Interaction Block (MHAblock) combines the different roles of each order for deep feature extraction.
To further exemplify skin lesion segmentation under interpretability, we visualize the results of the different orders in the last MHAblock of the decoder.
As shown in Fig. 7, we processed the visualization to display the prediction map together with the result map of each order, which helps to understand the role of each order interaction more clearly. As we can see from the figure, the 1-order has a stronger sensitivity to the top feature information of the skin lesion, the 2-order has a stronger perception of the lower-right boundary information, the 3-order is the most critical interaction order, with the deepest learning of the overall feature information of the skin lesion, the 4-order has a stronger perception of the upper-right boundary information, and the 5-order has a stronger perception of the left boundary information. Together with Tables 1-4, this shows that our model compares favourably with the current state-of-the-art models in skin lesion segmentation. In addition, the improvement in generalization ability is most exciting.
The positive case samples fully demonstrate the strong interpretability of our model. In addition, we extracted negative samples without lesions from the ISIC2017 dataset as a test. Fig. 8 shows the interpretability visualization obtained from our test on negative samples. In particular, we circled the highlighted areas of the 1-order and 4-order in red in Fig. 8. The 1-order and 4-order are expected to respond to the upper and upper-right features, respectively. In the explainable analysis of the negative samples, we clearly see incorrectly highlighted regions. This indicates the inability of the model to determine the location of a lesion in the inference process for a negative sample. In contrast to the positive samples, none of the visualized interpretations of the negative samples satisfy the expected pattern, showing spurious highlighted regions. The reliability of our interpretable method is thus demonstrated from both positive and negative samples.
Moreover, exploiting this strong interpretability, we perform classification with almost no additional memory by means of the proposed Explainable Inference Classification Algorithm (EICA). The adopted weights are trained on the segmentation task with only the 1250 positive samples of the ISIC2017 training set. Table 6 shows the results of these experiments: the positive detection rate on PH 2 and our clinical dataset reaches about 80%, while the negative detection rate on NormalSkin and Kaggle95 is 78.5% and 83.2%, respectively. This result quantitatively demonstrates the reliability of our interpretability.
Although our results and interpretable analyses are exciting, we have only studied them in skin lesion segmentation experiments. In the future, the use of multiple high-order attention interaction mechanisms for diagnostic studies of a wider range of diseases will be an important direction. In addition, we used only a small sample of external clinical data. Although it was used only for external validation, experiments with more clinical data would represent the performance of the model more comprehensively and accurately. In the future, the collection of clinical samples is also an important direction. In addition, our proposed Explainable Inference Classification Algorithm (EICA) has considerable room for improvement; in the future, the classification accuracy can be effectively improved by designing a more appropriate EICA." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this study, we propose high-order attention interaction mechanisms that introduce squeeze attention to multiple high-level feature attention.
In addition, to the best of our knowledge, we are the first to explain the mechanism of different order interaction mechanisms. Meanwhile, we propose the multiple high-order attention interaction block (MHAblock) by fusing the features of different orders. The MHAblock is introduced into the UNet architecture and the MHA-UNet model is proposed. In addition, our method has very high interpretability in the skin lesion segmentation task, and our strong interpretability is further demonstrated from both positive and negative case samples. Comparison experiments of MHA-UNet with 13 medical segmentation models and external validation experiments with 8 state-of-the-art models demonstrate the superiority of our proposed method, along with its strong interpretability. In the future, applying the proposed method to the diagnosis of more diseases and increasing clinical sample collection will be a key direction. In addition, the existence of a large space of improvement for the Explainable Inference Classification Algorithm (EICA) is also a very interesting direction." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was supported partly by Medical-Industrial Intersection Program of University of Shanghai for Science and Technology (2023LXY-RUIJINO1Z)." } ]
Computer-aided diagnosis of skin diseases is an important tool. However, the interpretability of computer-aided diagnosis is currently poor. Dermatologists and patients cannot intuitively understand the learning and prediction process of neural networks, which will lead to a decrease in the credibility of computer-aided diagnosis. In addition, traditional methods need to be trained using negative samples in order to predict the presence or absence of a lesion, but medical data is often in short supply. In this paper, we propose a multiple high-order attention interaction model (MHA-UNet) for use in a highly explainable skin lesion segmentation task. MHA-UNet is able to obtain the presence or absence of a lesion by explainable reasoning without the need for training on negative samples. Specifically, we propose a high-order attention interaction mechanism that introduces squeeze attention to a higher level for feature attention. In addition, a multiple high-order attention interaction (MHAblock) module is proposed by combining the different features of different orders. For classifying the presence or absence of lesions, we conducted classification experiments on several publicly available datasets in the absence of negative samples, based on explainable reasoning about the interaction of 5 attention orders of MHAblock. The highest positive detection rate obtained from the experiments was 81.0% and the highest negative detection rate was 83.5%. For segmentation experiments, comparison experiments of the proposed method with 13 medical segmentation models and external validation experiments with 8 state-of-the-art models in three public datasets and our clinical dataset demonstrate the state-of-the-art performance of our model. The code is available from https://github.com/wurenkai/MHA-UNet.
ONLY POSITIVE CASES: 5-FOLD HIGH-ORDER ATTENTION INTERACTION MODEL FOR SKIN SEGMENTATION DERIVED CLASSIFICATION
[ { "figure_caption": "Figure 1 :1Figure 1: The MHA-UNet model architecture proposed in this paper.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: (a)The HA-block structure proposed in this paper. (b)The HA structure proposed in this paper. Different colored sections represent different orders of attention interactions.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The MHAblock structure proposed in this paper.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Schematic diagram of the Explainable Inference Classification Algorithm (EICA) proposed in this paper.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: (A)Visualization of the results of the individual modules in the last MHAblock of the decoder. (B)Visualization of segmentation results from multiple medical segmentation models and our model in ISIC2017 and ISIC2018. Red contours indicate ground truth and blue contours indicate model predicted segmentation lines.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Ablation experiments on the fusion of different attention interaction orders in MHAblock.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Visualization plots of the characteristics of each attention interaction order in MHAblock.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: MHAblock-based explainable analysis of negative samples without lesions.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Performance comparison with 13 medical segmentation models on ISIC 2017 dataset.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance comparison with 13 medical segmentation models on ISIC 2018 dataset.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "External validation performance comparison with 8 state-of-the-art models on the PH 2 dataset.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation experiments on the effect of multiple orders and single orders on performance in MHA-UNet.", "figure_data": "SetupDSCSESPACCMHA-UNet(baseline) 0.9165 0.8979 0.9870 0.9680A0.9145 0.9086 0.9809 0.9667B0.9130 0.8899 0.9855 0.9668A+B0.9113 0.8826 0.9867 0.9664", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Experimental results for positive and negative categorization of multiple public datasets.", "figure_data": "DatasetsPDRNDRPH 281.0%-Our clinical datasets 79.5%-NormalSkin-78.5%Kaggle95-83.2%", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" } ]
Renkai Wu; Yinghao Liu; Qing Chang
[ { "authors": "Md Al; Mamun ; Mohammad Shorif Uddin", "journal": "International Journal of Healthcare Information Systems and Informatics (IJHISI)", "ref_id": "b0", "title": "A survey on a skin disease detection system", "year": "2021" }, { "authors": "Md Kamrul Hasan; Md Asif Ahamad; Choon Hwai Yap; Guang Yang", "journal": "Computers in Biology and Medicine", "ref_id": "b1", "title": "A survey, review, and future trends of skin lesion segmentation and classification", "year": "2023" }, { "authors": "Hue Tran; Keng Chen; Adrian C Lim; James Jabbour; Stephen Shumack", "journal": "Australasian journal of dermatology", "ref_id": "b2", "title": "Assessing diagnostic skill in dermatology: a comparison between general practitioners and dermatologists", "year": "2005" }, { "authors": "Yongming Rao; Wenliang Zhao; Yansong Tang; Jie Zhou; Ser ; Nam Lim; Jiwen Lu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b3", "title": "Hornet: Efficient high-order spatial interactions with recursive gated convolutions", "year": "2022" }, { "authors": "J Roderick; Nicole E Hay; Johns; C Hywel; Ian W Williams; Robert P Bolliger; David J Dellavalle; Robin Margolis; Luigi Marks; Martin A Naldi; Sarah K Weinstock; Wulf", "journal": "Journal of Investigative Dermatology", "ref_id": "b4", "title": "The global burden of skin disease in 2010: an analysis of the prevalence and impact of skin conditions", "year": "2014" }, { "authors": "Ozan Oktay; Jo Schlemper; Le Loic; Matthew Folgoc; Mattias Lee; Kazunari Heinrich; Kensaku Misawa; Steven Mori; Nils Y Mcdonagh; Bernhard Hammerla; Kainz", "journal": "", "ref_id": "b5", "title": "Attention u-net: Learning where to look for the pancreas", "year": "2018" }, { "authors": "Jieneng Chen; Yongyi Lu; Qihang Yu; Xiangde Luo; Ehsan Adeli; Yan Wang; Le Lu; Alan L Yuille; Yuyin Zhou", "journal": "", "ref_id": "b6", "title": "Transunet: Transformers make strong encoders for medical image segmentation", "year": "2021" }, { "authors": "Huisi Wu; Jiafu Zhong; Wei Wang; Zhenkun Wen; Jing Qin", "journal": "", "ref_id": "b7", "title": "Precise yet efficient semantic calibration and refinement in convnets for real-time polyp segmentation from colonoscopy videos", "year": "2021" }, { "authors": "Maryam Asadi-Aghbolaghi; Reza Azad; Mahmood Fathy; Sergio Escalera", "journal": "", "ref_id": "b8", "title": "Multi-level context gating of embedded collective knowledge for medical image segmentation", "year": "2020" }, { "authors": "Huisi Wu; Shihuai Chen; Guilian Chen; Wei Wang; Baiying Lei; Zhenkun Wen", "journal": "Medical image analysis", "ref_id": "b9", "title": "Fat-net: Feature adaptive transformers for automated skin lesion segmentation", "year": "2022" }, { "authors": "Ehsan Khodapanah Aghdam; Reza Azad; Maral Zarvani; Dorit Merhof", "journal": "IEEE", "ref_id": "b10", "title": "Attention swin u-net: Cross-contextual attention mechanism for skin lesion segmentation", "year": "2023" }, { "authors": "Yueyue Hu Cao; Joy Wang; Dongsheng Chen; Xiaopeng Jiang; Qi Zhang; Manning Tian; Wang", "journal": "Springer", "ref_id": "b11", "title": "Swin-unet: Unet-like pure transformer for medical image segmentation", "year": "2022" }, { "authors": "Jiacheng Ruan; Suncheng Xiang; Mingye Xie; Ting Liu; Yuzhuo Fu", "journal": "IEEE", "ref_id": "b12", "title": "Malunet: A multi-attention and light-weight unet for skin lesion segmentation", "year": "2022" }, { "authors": "Ran Gu; Guotai Wang; Jiangshan Lu; Jingyang Zhang; Wenhui Lei; Yinan Chen; Wenjun Liao; Shichuan Zhang; 
Kang Li; Dimitris N Metaxas", "journal": "Medical Image Analysis", "ref_id": "b13", "title": "Cddsa: Contrastive domain disentanglement and style augmentation for generalizable medical image segmentation", "year": "2023" }, { "authors": "Reza Azad; Mohammad T Al-Antary; Moein Heidari; Dorit Merhof", "journal": "IEEE Access", "ref_id": "b14", "title": "Transnorm: Transformer provides a strong spatial normalization mechanism for a deep segmentation model", "year": "2022" }, { "authors": "Huisi Wu; Zebin Zhao; Zhaoze Wang", "journal": "IEEE Transactions on Automation Science and Engineering", "ref_id": "b15", "title": "Meta-unet: Multi-scale efficient transformer attention unet for fast and high-accuracy polyp segmentation", "year": "2023" }, { "authors": "Junde Wu; Rao Fu; Huihui Fang; Yuanpei Liu; Zhaowei Wang; Yanwu Xu; Yueming Jin; Tal Arbel", "journal": "", "ref_id": "b16", "title": "Medical sam adapter: Adapting segment anything model for medical image segmentation", "year": "2023" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b17", "title": "Segment anything", "year": "2023" }, { "authors": "Zilong Zhong; Zhong Qiu Lin; Rene Bidart; Xiaodan Hu; Ibrahim Ben Daya; Zhifeng Li; Wei-Shi Zheng; Jonathan Li; Alexander Wong", "journal": "", "ref_id": "b18", "title": "Squeeze-and-attention networks for semantic segmentation", "year": "2020" }, { "authors": "Yongming Rao; Wenliang Zhao; Zheng Zhu; Jiwen Lu; Jie Zhou", "journal": "Advances in neural information processing systems", "ref_id": "b19", "title": "Global filter networks for image classification", "year": "2021" }, { "authors": "David Noel Cf Codella; M Emre Gutman; Brian Celebi; Michael A Helba; Stephen W Marchetti; Aadi Dusza; Konstantinos Kalloo; Nabin Liopyris; Harald Mishra; Kittler", "journal": "IEEE", "ref_id": "b20", "title": "Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (isbi), hosted by the international skin imaging collaboration (isic)", "year": "2018" }, { "authors": "Noel Codella; Veronica Rotemberg; Philipp Tschandl; M Emre Celebi; Stephen Dusza; David Gutman; Brian Helba; Aadi Kalloo; Konstantinos Liopyris; Michael Marchetti", "journal": "", "ref_id": "b21", "title": "Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration", "year": "2019" }, { "authors": "Philipp Tschandl; Cliff Rosendahl; Harald Kittler", "journal": "Scientific data", "ref_id": "b22", "title": "The ham10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions", "year": "2018" }, { "authors": "Teresa Mendonça; Pedro M Ferreira; Jorge S Marques; André Rs Marcal; Jorge Rozeira", "journal": "IEEE", "ref_id": "b23", "title": "Ph 2-a dermoscopic image database for research and benchmarking", "year": "2013" }, { "authors": "Md Zahangir Alom; Mahmudul Hasan; Chris Yakopcic; M Tarek; Taha; Vijayan; Asari", "journal": "", "ref_id": "b24", "title": "Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation", "year": "2018" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b25", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "Springer", "ref_id": "b26", "title": "U-net: 
Convolutional networks for biomedical image segmentation", "year": "2015" } ]
[ { "formula_coordinates": [ 4, 190.71, 439.32, 349.96, 13.2 ], "formula_id": "formula_0", "formula_text": "X HW ×C 0 , Y HW ×C0 0 , Y HW ×C1 1 = P ro in (x) ∈ R HW ×2C(1)" }, { "formula_coordinates": [ 4, 220.19, 459.67, 320.48, 11.72 ], "formula_id": "formula_1", "formula_text": "Out 1 = P ro [X 0 * GLF (Y 0 )] ∈ R HW ×C(2)" }, { "formula_coordinates": [ 4, 197.87, 475.73, 342.8, 11.72 ], "formula_id": "formula_2", "formula_text": "Out 2 = P ro out [SA (Out 1 ) * GLF (Y 1 )] ∈ R HW ×C(3)" }, { "formula_coordinates": [ 4, 144.31, 572.36, 123.81, 13.6 ], "formula_id": "formula_3", "formula_text": "C k = C 2 n-k-1 , 0 ≤ k ≤ n -1." }, { "formula_coordinates": [ 4, 203.42, 603.02, 337.24, 14.03 ], "formula_id": "formula_4", "formula_text": "X HW ×C0 0 , Y HW ×C0 0 , • • • Y HW ×Cn-1 n-1 = P ro in (x)(4)" }, { "formula_coordinates": [ 4, 224.61, 627.66, 316.06, 9.65 ], "formula_id": "formula_5", "formula_text": "X k+1 = P ro [SA (X k ) * GLF (Y k )] /α (5)" }, { "formula_coordinates": [ 5, 236.11, 394.38, 304.55, 12.17 ], "formula_id": "formula_6", "formula_text": "Ôatt = Conv att [AP (x); θ att , Ω att ](6)" }, { "formula_coordinates": [ 5, 256.09, 417.01, 284.58, 31.3 ], "formula_id": "formula_7", "formula_text": "O att = U p Ôatt (7) O = O att * X res + O att (8)" }, { "formula_coordinates": [ 5, 184.84, 608.58, 355.83, 30.2 ], "formula_id": "formula_8", "formula_text": "F[f ] : F (u, v) = 1 HW H-1 x=0 W -1 y=0 f (x, y)e -/2π(kx/H+vy/W )(9)" }, { "formula_coordinates": [ 5, 193.69, 649.47, 346.98, 30.2 ], "formula_id": "formula_9", "formula_text": "F -1 [F ] : f (x, y) = H-1 u=0 W -1 v=0 F (u, v)e j2π(ux/H+vy/W )(10)" }, { "formula_coordinates": [ 5, 248.35, 684.89, 292.32, 11.72 ], "formula_id": "formula_10", "formula_text": "F out = F -1 [V * F[f (x, y)]](11)" }, { "formula_coordinates": [ 6, 240.27, 490.04, 300.4, 12.69 ], "formula_id": "formula_11", "formula_text": "Γ k o = O k * Cat[Conv(F k out ), L](12)" }, { "formula_coordinates": [ 6, 263.27, 511.66, 277.4, 12.69 ], "formula_id": "formula_12", "formula_text": "X k+1 = P ro(Γ k o )/α(13)" }, { "formula_coordinates": [ 7, 280.42, 397.49, 260.25, 8.96 ], "formula_id": "formula_13", "formula_text": "X = EA(x)(14)" }, { "formula_coordinates": [ 7, 261.53, 412.05, 279.14, 54.14 ], "formula_id": "formula_14", "formula_text": "1 x 2 x 3 x 4 x 5          =          HA 1 (X) HA 2 (X) HA 3 (X) HA 4 (X) HA 5 (X)(15)" }, { "formula_coordinates": [ 7, 236.47, 472.59, 300.05, 9.65 ], "formula_id": "formula_15", "formula_text": "C 0 = Concat (x 1 , x 2 , x 3 , x 4 , x 5 ) . (16" }, { "formula_coordinates": [ 7, 257.63, 472.91, 283.03, 24.54 ], "formula_id": "formula_16", "formula_text": ") Out = x * V b (C 0 ) + x (17" }, { "formula_coordinates": [ 7, 536.52, 488.11, 4.15, 8.64 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 7, 281.82, 542.65, 258.84, 8.96 ], "formula_id": "formula_18", "formula_text": "A = SA(x)(18)" }, { "formula_coordinates": [ 7, 248.2, 560.79, 288.32, 8.96 ], "formula_id": "formula_19", "formula_text": "Out = Sig{BN [Conv(x)]} (19" }, { "formula_coordinates": [ 7, 536.52, 561.11, 4.15, 8.64 ], "formula_id": "formula_20", "formula_text": ")" }, { "formula_coordinates": [ 9, 251.36, 352.4, 289.31, 22.31 ], "formula_id": "formula_21", "formula_text": "DSC = 2TP 2TP + FP + FN(20)" } ]
10.1109/tssc.1968.300136
2023-12-10
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b20", "b20" ], "table_ref": [], "text": "Crossword puzzles have gained immense popularity as a widely played language game on a global scale. Daily, millions of individuals engage in the challenge, requiring a combination of skills. To solve crosswords effectively, humans need to possess a broad vocabulary, general knowledge across various subjects, and the ability to decipher wordplay and puns. Human solvers should master the crossword language, its peculiarities, and specific knowledge belonging to the country in which it is spoken. They must also excel in pattern recognition, interpret contextual clues accurately, employ problem-solving strategies, and demonstrate patience and perseverance. Mastering these skills enables individuals to tackle crossword puzzles with efficiency, accuracy, and a higher likelihood of success. This scientific paper introduces a novel version of WebCrow 2.0, an AI-powered application specifically designed for efficiently solving French crosswords. It represents the first of its kind in the realm of French crosswords, building upon the previous versions developed for Italian and American crosswords. We will discuss the peculiarities of the French version in section 4 and the underlying architecture in section 5. Solving crosswords based on clues is widely recognized as an AI-complete problem [12], owing to its intricate semantics and the extensive breadth of general knowledge required. Artificial intelligence has recently shown an increasing interest in crossword solving. [21] Through this work we are introducing a notable milestone in the literature, the French WebCrow system, which achieved humanlike performance on French crosswords by leveraging numerous knowledge-specific expert modules. WebCrow 2.0 can rely on a limited amount of previously solved crosswords and clue-answers pairs. In the case of French crosswords, WebCrow 2.0 made use of about 7.000 previously solved crossword puzzles and about 312,000 unique clueanswers pairs. Studies in American crosswords rely on millions of clue-answers pairs, 6.4M [21], and on the fact that almost all of the answers are in previously seen crosswords. This is not the case with French crosswords, for which the availability of a huge collection is limited, thus a more robust approach is required. The primary objective of French WebCrow is to establish its competitiveness against human crossword solvers by leveraging expert modules, NLP (Natural Language Processing) technologies, web search, and merging techniques to efficiently generate candidate answer lists and fill crossword grids accurately. The goal of the web search source of information is to provide accurate solutions to crossword puzzles without the burden of maintaining an up-to-date multitude of domain-specific modules. By tapping into the web as an extensive source of information, French WebCrow offers the promise of scalability and adaptability. The upcoming sections provide information on related works and a comprehensive overview of the various components of WebCrow 2.0. Detailed explanations will be given on the French WebCrow version, accompanied by a thorough analysis of the experimental results. Finally, the paper will conclude by summarizing the findings and highlighting the significance of this research in the field of crossword solving." 
}, { "figure_ref": [ "fig_0" ], "heading": "Related Works", "publication_ref": [ "b13", "b9", "b0", "b6", "b1", "b12", "b8", "b2", "b0", "b6", "b1", "b12" ], "table_ref": [], "text": "In the literature, various attempts have been made to solve crossword puzzles. However, none of these approaches have adequately addressed the specific challenges posed by French crosswords. In the following, we will delve into a review of existing works that have tackled the task of solving crosswords. One of the first works on crossword solving is Proverb [14], which tackles American crosswords. The system makes use of independent programs that solve specific types of clues, leveraging information retrieval, database searching, and machine learning. During the grid filling phase, it tries to maximize the number of most probable words in the grid, using a loopy belief propagation, combined with A* search [10]. Taking into account the Proverb experience, WebCrow [1,7,2] is the first crossword solving for Italian crosswords. WebCrow introduces the use of a Web Search Module (WSM), that extracts and filters potential answers from the Web, being this an extremely rich and self-updating repository of human knowledge. Additionally, the system retrieves clues from databases of previously solved crossword puzzles (CPs). A merging process aims to consolidate the potential solutions from both web documents and previously solved CPs. Subsequently, the system employs a probabilistic Constraint Satisfaction Problem (CSP) approach, similar to the Proverb system [13], to fill the puzzle grid with the most suitable candidate answers. Both Proverb and WebCrow proved to be better-than-average cruciverbalists (crossword solvers). Following these experiences, we can find Dr.Fill work [9], a program designed to solve American-style crossword puzzles. Dr.Fill converts crosswords into weighted Constraint Satisfaction Problems (CSPs) and utilizes innovative techniques, including heuristics for variable and value selection, a variant of limited discrepancy search, and postprocessing and partitioning ideas. The program's performance in the American Crossword Puzzle Tournament suggests it ranks among the top fifty crossword solvers globally. In the field of crossword solving, there is also SACRY [3], introduced in 2015, a system that leverages syntactic structures for clue reranking and answer extraction. The authors build upon the foundation of WebCrow [1,7,2] to develop SACRY. The system utilizes a database of previously solved crossword puzzles (CPs) to generate a list of candidate answers. One of the key contributions of SACRY is its emphasis on exploiting syntactic structures. By incorporating syntactic analysis, SACRY improves the quality of the answer list, enhancing the accuracy of crossword puzzle resolution. Recently, there is the Berkeley Crossword Solver, a cutting-edge approach that revolutionizes automatic American crossword puzzle solving. The system employs neural question-answering models to generate answer candidates for each crossword clue and combines loopy belief propagation with local search techniques to discover complete puzzle solutions. One of the standout features of the Berkeley Crossword Solver is its use of neural question-answering models, which significantly enhances the accuracy of generating answer candidates. In the subsequent sections, we will provide a comprehensive and detailed explanation of the various components comprising our system. 
We aim to delve into each part, elucidating its functionalities and intricacies, to offer a thorough understanding of our system's architecture and its underlying mechanisms.\n3 Overview of WebCrow 2.0 WebCrow 2.0 is based on the previous WebCrow project experience ([7]). As shown in Fig. 1, WebCrow has a first phase of clue analysis and clue answering. For each clue a list of candidate answers, of the suitable length, is generated by a variable number of experts. Then, all ordered lists are merged into a unique list for each clue. The merging phase takes into account information like the expert module's confidence, the clue type and the answer length. The list merger module and list filtering module, based on morphological information, are both trainable on data. Next comes a belief propagation step( [13]) which reorders the candidate lists based on the puzzle constraints. Finally, the last step is the realsolving mechanism that actually fills the grid with letters, using a new grid-filling approach, the Char Based Solver algorithm. " }, { "figure_ref": [], "heading": "Modularity", "publication_ref": [ "b18" ], "table_ref": [], "text": "WebCrow 2.0 has a modular architecture, based on Redis as a communication backbone. Redis implements a Publish/Subscribe messaging paradigm which allows asynchronous communication between agents of nearly every programming language [19]. The advantage is that with little effort we are able to design expert modules for new languages or based on state-of-the-art natural language processing techniques.\nBased on our experience, expert modules should cover these three types of knowledge:\n-Lexical and Ontological Knowledge: knowledge about the way we use language to represent the world and organize information. -Crossword-specific experiential Knowledge: frequent crossword clueanswer pairs, specific conventions and rules which recur in crossword puzzles.\n-Factual and Common Knowledge: encyclopedic knowledge, common sayings, facts, and events of a common cultural background. The Web can be viewed as a repository of this kind of knowledge.\nIn the next section, we are going to analyze in more detail the most crucial expert modules that contribute to the creation of candidate answer lists." }, { "figure_ref": [], "heading": "The Expert Modules", "publication_ref": [ "b16", "b10", "b15", "b5", "b4", "b21" ], "table_ref": [], "text": "Word Embedding expert The Word Embedding expert takes into account the idea that crossword puzzles often contain knowledge that has already been encountered in previously solved crosswords. Word embeddings [17,11,16,6,5] offer a way to map individual words or sequences of words (sentences) to specific vectors within a high-dimensional geometric space. This mapping ensures that similar words or sentences are located in close proximity to each other, while sentences with unrelated meanings are positioned far apart.\nBuilding upon a retrieval and ranking approach for crossword clue answers [22], this expert employs the Google Universal Sentence Encoder (USE) to embed each puzzle clue. It then searches for the most similar clues within the clue-answers dataset, leveraging the capability of word embeddings to discover linguistic connections between clues." }, { "figure_ref": [], "heading": "WebSearch expert", "publication_ref": [ "b14" ], "table_ref": [], "text": "The Web Search Module utilizes web documents and search engines to identify suitable answers for crossword clues. 
It consists of a web-based list generator, a statistical filter, and an NLP category-based filter. The module excels at handling longer words or compound-word targets. It is particularly useful for obtaining up-to-date data that may not be available in other modules. In our current implementation, we have seamlessly integrated the Bing API [15], but it is also feasible to utilize alternative search APIs." }, { "figure_ref": [], "heading": "Knowledge Graph expert", "publication_ref": [ "b7" ], "table_ref": [], "text": "In this paper, we introduce a novel expert that utilizes expert.ai's linguistic knowledge graph [8], which provides a domain-independent representation of the real world through concepts, their related meanings, and the different relationships that exist among concepts. Each linguistic concept is explained using its similar meanings, its definition, and its related concepts extracted from the Knowledge Graph. The concept is then mapped using word embeddings, which enables a search similar to the one performed by the Word Embedding expert. This new expert has proven to be invaluable in solving clues that require both lexical and ontological knowledge, such as \"Sick\" [ILL] or \"Supportive kind of column\" [SPINAL]. Inside the expert.ai Knowledge Graph, \"sick\" and \"ill\" are two words belonging to the same concept: they are synonyms. As for \"spinal\", there is a concept \"spinal column\" which is a specification (kind of) of the concept \"column\".
Other Expert Systems for Language-Specific Crosswords Expert systems for language-specific crosswords are designed to cater to the specific nuances of the language. For example, in Italian crosswords there are often word plays with 2-letter answers. To address this, a hard-coded expert system has been developed that encodes many of the possible types of word plays, resulting in high-confidence answers. A similar approach has been taken for French solvers, as described in Section 5.3. However, such a situation is not present in American-style crosswords, where the minimum number of letters for an answer is 3." }, { "figure_ref": [], "heading": "Merging", "publication_ref": [], "table_ref": [], "text": "Once all the experts have produced their outputs, which are lists of candidate words each associated with a probability, the lists are merged into a unique list for each clue. The merging procedure consists of a weighted average of the expert lists based on the length of the answer; the weights are picked in a specific training phase." }, { "figure_ref": [], "heading": "Grid Filling", "publication_ref": [ "b3" ], "table_ref": [], "text": "For the grid-filling phase, we made use of a Char Based Solver. This approach is more robust in case some candidate lists do not contain the correct answer, which is very likely in French crosswords. For each slot $s$ we accumulate the probability mass $p^s_d(c)$ of a letter $c$ in a given direction $d$ (Across or Down), adding the probabilities of all candidate words that place the letter $c$ in slot $s$ with direction $d$. We compute the probability mass $p^s(c)$ as:
$p^s(c) = p^s_A(c) \cdot p^s_D(c)$   (1)
This can be seen as the probability of the letter $c$ being correctly inserted in a given cell, considering the constraint network and the answer lists.
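As a concrete illustration of Eq. (1), the sketch below accumulates the per-cell letter masses from the across and down candidate lists and multiplies them. It is a hypothetical, simplified rendering of the idea: the `candidates` and `cells` structures, the "A"/"D" direction labels and the absence of any normalization are assumptions made for the example, not details of WebCrow's implementation.

```python
# Illustrative computation of the per-cell letter probability mass of Eq. (1).
# candidates[(slot, direction)] -> list of (word, probability); cells(slot, direction) -> grid coords.
from collections import defaultdict

def letter_masses(candidates, cells):
    # cell -> letter -> {"A": mass, "D": mass}
    mass = defaultdict(lambda: defaultdict(lambda: {"A": 0.0, "D": 0.0}))
    for (slot, direction), cand_list in candidates.items():
        coords = cells(slot, direction)
        for word, prob in cand_list:
            for ch, cell in zip(word, coords):
                mass[cell][ch][direction] += prob        # p^s_d(c): sum over words placing ch here
    # Eq. (1): combine the across and down masses for every candidate letter of every cell
    return {cell: {ch: dirs["A"] * dirs["D"] for ch, dirs in letters.items()}
            for cell, letters in mass.items()}
```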
We then use two criteria to assign to the given box the letter c and in this way constrain the grid filling.\n(p s (c) > 99.99%) and (best\nA (c) == best D (c))(2)\n(p s (c) > 99.00%) and (best\nA (c) == best D (c)) and (p s A (c), p s D (c) > 90%) (3)\nIn other terms equation 2 states that a letter c is chosen for a cell if the confidence on that letter being in that cell is higher than 99.99% and it is the most likely prediction in both directions. Where best A (c) is the most likely letter in the across direction and best D (c) the most likely in down direction. Obviously this two letter must be the same.\nEquation 3 instead states that if the confidence on a given letter being in a given cell is only 99.00% then it is not enough to be the most likely for both directions (best A (c) == best D (c)) but that letter must have more than 90% probability for both directions.\nIf either of these criteria is met, then the character is assigned to that particular position. Otherwise, it will be filled in a second phase with the most probable word that does not break any other char-based constraint. In the unlikely event that no word satisfies the bond, the cell is left unfilled or could be filled by another post-processing expert, such as an implicit module. 4 The French Crosswords" }, { "figure_ref": [], "heading": "Format and Rules", "publication_ref": [], "table_ref": [], "text": "The French crossword format is similar to Italian crosswords. Unlike American crosswords, two-letter words and \"Blind cells\" (cells that belong to only one word) are allowed. Stacked answers made up of multiple words are less common in French crosswords and generally correspond to expressions. French crossword puzzles vary greatly in size and in the type of knowledge used. In the next sub-sections, we will describe in more detail these aspects." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "French Crosswords Dataset", "publication_ref": [], "table_ref": [], "text": "For the French dataset, we collected over 300,000 clue-answer pairs, with the answer length distribution shown in Figure 2. Additionally, we compiled a collection of approximately 7,000 solved crossword puzzles from diverse sources. We owe our success in this endeavor, completed in just a few months, to the invaluable collaboration of two prolific authors, Serge Prasil and Michel Labeaume. As we can see in table 1, the French dataset of previously seen clue-answers pairs and crosswords is comparable to the Italian dataset, while the American dataset is considerably huger. Moreover, American crosswords are more standard. Almost all clue answers are present in previous crosswords, which is not the case with French crosswords. In figure 2 we show the statistics of the answer length present in French crosswords. The majority of the answer's lengths are below 10. Answers with higher lengths are covered by verb inflections, compound words, or linguistic expressions. " }, { "figure_ref": [], "heading": "Linguistic and Cultural Peculiarities", "publication_ref": [ "b3", "b3" ], "table_ref": [], "text": "Unlike Italian and American crosswords, French crosswords use a wide range of verb inflections in their solutions, covering nearly every possible tense and person. However, the definitions provided in the clues often lead to the correct inflection. Furthermore, we have observed that French crossword authors have distinct individual styles that vary greatly from one another. 
As in other crossword languages, the aim of a crossword author is to provide clues that are obscure enough while keeping solutions that should appear obvious once found [4]. He must find the right level of difficulty for all the pairs of solutions. When this level is too high, the risk is to discourage people from trying to solve the crossword. On the contrary, if the clues are too simple, it is a memory or patience game, but there is no challenge, and usually, French crossword players prefer tricky enigmas, with few clues, twisted words, or traps. French crossword authors inherit from the art of conversation in classical French culture, which is well represented by the periphrase \"la langue de Molière\" to designate French. As a result, French authors take pride in being witty in the definitions they provide. They must be creative in finding jokes that make the solver laugh [4], which leads to the development of distinct individual styles." }, { "figure_ref": [], "heading": "Examples of clues in French crosswords", "publication_ref": [], "table_ref": [], "text": "In this section, we categorize the types of clues found in French crosswords and provide illustrative examples. Some of the examples are very specific to the French language, in particular the examples given in sections Inflections or Domain Specific Knowledge, and some other examples related for instance to rare words or word games can be found in other languages as well.\nInflections French crosswords make extensive use of rare verb tenses and modes, which can make it challenging to find the correct inflection of the word to be guessed through a direct web search. For instance, in the following clue answer pair: Auraient des soucis excessifs [CAUCHEMARDERAIENT], the verb to guess \"cauchemarder\", which means \"having a nightmare\", is rarely used at the conditional present, at the third person plural. In another example, Apitoie [ATTENDRISSE], the clue can refer to either the first or third person at the indicative or subjunctive present tense. Depending on the verbal group of the solution, the inflection can vary significantly at these tenses and persons.\nRare words Some clues may involve words that are rare in French, either because they are ancient words or foreign words, or these words belong to the literary register or, conversely, to the colloquial or slang register. For instance, the solution of the clue Dessiner sans soin [STRAPASSER] is an old verb. As the frequency of these words is low, they may appear with a very low probability, and in some cases, they may not appear at all in the candidate solutions list. Domain Specific Knowledge Some puzzles require domain-specific knowledge, such as very specific geographical knowledge. For example, a clue may be: Elle habite une commune située dans le département de l'Isère [SICCIOLANDE], meaning that we need to search for the name of the female inhabitants of a city in a specific French department. There is no generic rule in French for determining the name of the inhabitants from the name of the city, and sometimes the name of the inhabitants (in this case, \"SICCIOLANDE\") can be very different from the city name; in this example, the city name is \"Siccieu-Saint-Julien-et-Carisieu\". Therefore, solving this type of riddle requires a combination of encyclopedic knowledge, spelling rules, and potential knowledge of spelling exceptions.\nThe following example requires specific knowledge of French literature: Le bleu et le blanc du poète [OE]. 
This example pertains to the poem \"Voyelles\" by the renowned French poet Arthur Rimbaud, where each vowel is linked to a color. In this poem, the vowel \"O\" is associated with the color blue (\"bleu\"), and the vowel \"E\" is associated with the color white (\"blanc\") Generic Words With Few Indices On the other hand, some clues may consist of a few generic words such as color names and adverbs, which can be linked to numerous solutions. In such cases, the definition is not clearly connected to the answers, making automatic graph search more challenging. For instance, consider the following clue: Pétales de rose [ESE]. One may be misled by the words \"Pétales\" and \"rose\", which could refer to the lexical field of flowers. However, in French crosswords, they refer to the compass rose, and the solution could be of the type ESE (\"Est, Sud, Est\" meaning direction East, South, East), NN, NSN, and so on.\nWord Games Word games are a type of clue in which the solver must manipulate the multiple meanings of the words in order to arrive at the solution.\nIn crossword puzzles, common word games involve the letters of a single word, which may be either part of the clue or part of another word that must be guessed. For example, consider the clue A la sortie de Strasbourg [RG]. The phrase \"A la sortie de\" translates to \"At the exit of\" and suggests that the solution is composed of the last letters of the word \"Strasbourg\". This clue is made more challenging by the fact that \"Strasbourg\" is a proper noun, and solvers may be tempted to look for a solution that is geographically related to the city.\nTwo Steps Clues Some crossword puzzles can be challenging as they require two or more steps to arrive at the solution. For instance, consider the clue À l'envers : coût [FIRAT]. To solve this puzzle, one must first identify a synonym for the word \"coût\" (TARIF) and then invert the letters (FIRAT), as indicated by the phrase \" À l'envers :\". Similarly, in the clue Grecque a l'envers [ATE], the solver must recognize that \"Grecque\" refers to a Greek letter before inverting the letters of the word found. In the example Impro de jazz sans voyelle [SCT], while it may seem straightforward to humans, this could prove to be a challenging task for a machine. The solver should find the answer to the definition of \"impro de jazz\" (\"jazz improvisation\") without any information about the word length before removing the vowels.\nMultiple Categories Finally, crossword puzzles often combine multiple difficulties. In this example: Attaquerai les portugaises [ESSORILLERAI], the author Serge Prasil used slang expression \"les portugaises\", to refer to ears. The verb to guess is further an ancient word, a medieval torture that means cutting off the ears, in an unusual form, because it is conjugated at the future." }, { "figure_ref": [], "heading": "The System Architecture", "publication_ref": [], "table_ref": [], "text": "The recent changes in the architecture allowed for easy incorporation of new agents and modification of existing ones by simply adjusting the parameter configuration. For example, the web-search expert (see Section 3.2) was ported to French by modifying the query language in the parameter set.\nTo update the Word Embedding Expert, we required the French crosswords dataset described in Section 4.2. 
The clues had to be encoded further with the Universal Sentence Encoder, as explained in the Word Embedding expert section (see Section 3.2).\nAfter implementing these two expert agents, we analyzed the results to identify the areas where most errors occurred. We discovered that 29% of missing answers were due to missing verb inflections, and 8% were due to adjective or noun inflections. Among all verb forms, the present tense was used only 20% of the time, while the past simple, a tense rarely used in everyday life, was used 40% of the time. Among the inflections of adjectives, the feminine form was used 58% of the time, and the plural form was used 55% of the time." }, { "figure_ref": [], "heading": "Knowledge Graph Expert", "publication_ref": [], "table_ref": [], "text": "As per the analysis of the most common errors, we have enhanced expert.ai's French knowledge graph. The results analysis revealed the need to incorporate inflections of verbs, adjectives, and nouns. To achieve this, we followed the same approach as described in Section 3.2. However, in this case, in addition to adding the connected concepts with the same description, we also included the required inflections." }, { "figure_ref": [], "heading": "Lexicon", "publication_ref": [ "b17" ], "table_ref": [], "text": "In addition, we identified a need to enhance the lexicon utilized by WebCrow. To address this, we incorporated Lexique 3.83, a French lexicon database containing approximately 123K distinct entries of at least 2 letters, as in [18]. We combined this dataset with data from a French dictionary, resulting in a final lexicon comprising approximately 198K words." }, { "figure_ref": [], "heading": "Rule-Based Expert", "publication_ref": [], "table_ref": [], "text": "We have developed a Python-based expert module for French crosswords that can decipher common word games. The module is designed to identify target words in the clues and provide associated lists of solutions. The target words may include Arabic number conversions to words, Roman numerals, chemical elements from Mendeleev's table, French departments, grammar lists (such as personal pronouns, conjunctions, and prepositions), and Greek letters.\nFurthermore, the Rule-based expert was designed to decipher clues that indicate the presence of word games in finding the answers, and where the solution involves either the inversion of a word, a reduced set of letters, or a mix of letters. The word on which the word game applies may be included in the clue or not. In the latter case, which we called \"two steps clues\" in chapter 4.4, the rule-based expert first searches for a list of possible solutions by calling the Word Embedding expert and then applies the word game to the letters of each word in the list." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "In this section, we present the comprehensive results obtained from our experimentation. Following the development of the system, as outlined in the preceding sections, we proceeded to assess its performance on previously unseen crosswords." }, { "figure_ref": [], "heading": "Test Dataset", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "To ensure a robust evaluation, we carefully selected a dataset comprising 62 distinct crosswords that were published subsequent to the crosswords used for constructing the different experts, such as the Word Embedding expert3.2. 
This selection criterion ensured that there was no overlap between the crosswords utilized for training and those employed for testing purposes. To evaluate the performance of our proposed solution, we conducted an extensive analysis using a diverse set of crossword puzzles sourced from multiple authors and Our dataset comprises 10 puzzles each from two renowned creators, Michel Labeaume and Serge Prasil. Furthermore, we incorporated 40 additional crosswords from established publishers to facilitate a thorough assessment. Detailed information about the test crossword can be found in Table 2. We used diverse crosswords to test the system's ability to handle different puzzle styles, author preferences, and construction variations. This approach helped us understand the system's performance and adaptability in unseen crosswords." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "We evaluated the system's performance using three distinct metrics: percentage of correct words, which measures the accuracy of inserting the correct target answers, percentage of correct letters, which evaluates the accuracy of inserting individual letters, and percentage of inserted letters, which assesses the system's ability to fill crossword slots. For a comprehensive overview of these metrics across different sources of crosswords, refer to Table 3. It encapsulates the corresponding results obtained from the test sets of various crossword sources, shedding light on the overall performance of our system in solving French Crosswords. Our crossword solver achieved impressive results in solving French crosswords from Michel Labeaume and Serge Prasil, with some 100% solved crosswords. On the other sources, the performance varied a lot, we had some sources with fully correct solved crosswords, while on other crosswords the system performed poorly. Based on our analysis some authors use very specific styles and knowledge, which demonstrates that solving crosswords is an AI-complete and open-domain problem. In some cases, answers were very domain-specific, see section 4.4.\nOverall, these remarkable results demonstrate the robust performance of our system in solving French crosswords. The accuracy rates obtained highlight the system's ability to effectively fill in words and letters, thus confirming its competence in solving French crossword puzzles.\nIn table 4, we tested the system by removing some expert modules. These tests show that each module is necessary to obtain the best results, the Full version, and that different source of knowledge is required to solve crosswords. Unlike American crossword studies, there is not a huge dataset of previously solved crosswords. Moreover, French crosswords are not as standard as American ones. Each crossword can vary a lot, influenced by the style and imprint of its author. To gain insights into our system's strengths, limitations, and relative performance compared to human crossword solvers, we conducted challenging competitions. The subsequent section presents a detailed analysis of these comparative evaluations." }, { "figure_ref": [], "heading": "AI vs Human challenges", "publication_ref": [], "table_ref": [ "tab_4", "tab_5" ], "text": "We organized an internal challenge at INRIA to evaluate our system's performance in a real-world scenario, putting it against human participants. The challenge included French and American crossword puzzles. 
Both humans and We-bCrow were allowed to utilize web searches during the challenge. The challenge included three crosswords: an easy-medium-level French crossword with a 10-minute time limit (score counted), a medium-hard level French crossword with a 20-minute time limit (score counted), and an American crossword with a 10-minute time limit (score counted). The experimental results, including the performance of WebCrow (Live and Lab), the average human performance, and the best human performance are presented in Table 5. Two modes were implemented: \"WebCrow Live\" where the system ran in real-time with predetermined configurations, and \"WebCrow Lab\" where results were computed in advance in the laboratory. It is important to note that variations in web information could lead to discrepancies between the results of the two modes.\nWe also conducted a public challenge at the World AI Cannes Festival 2023, evaluating the French version of WebCrow. There were three challenges, one for each language: French crosswords, Italian crosswords, and American crosswords. Each challenge had two crosswords valid for the competition with time limits. The two French crosswords were created specifically for the challenge by renowned authors Serge Prasil and Michel Labeaume.\nThe scoring system gave points from 0 to 100 based on the percentage of correct words (0 to 110 for the second crosswords. Then some additional points (maximum 15) were added based on the percentage of time not used. We had 15 minutes for the first crossword and 20 minutes for the second. Finally, in case of a fully correct answer, 15 points were awarded. The detailed experimental outcomes of the WAICF French crossword-solving challenge can be found in Table 6. This challenge provided insights into We-bCrow's performance and its cross-lingual capabilities. Humans cruciverbalist are strong only on one language. In the French crossword challenge, there was no strong human competitor present. This leaves space for further challenges with French experts in crosswords." }, { "figure_ref": [], "heading": "Conclusion and Future Works", "publication_ref": [ "b19", "b20" ], "table_ref": [], "text": "In conclusion, this work represents a significant advancement in the field of crossword solving. By capitalizing on our previous experience in the field we present a novel version of WebCrow 2.0 and its French WebCrow version, which represents the first French crossword solver. In this work we collected a dataset of French crosswords, enabling us to make some comparisons with crosswords in other languages, Italian and American. Moreover, we analyzed the peculiarities of French crosswords. French crossword puzzles vary greatly, they are not standard like the American ones, the size, the knowledge, and the language games involved are influenced by the style and imprint of its author. French WebCrow is an above-human-average crossword solver, but there is still room for improvements. The potential for French WebCrow to achieve competitive performance serves as a strong motivation for further research and development, paving the way for AI-powered crossword solving to reach new heights. There are three main branches for future development. First of all, there is room to improve the performances of both the Italian and French solvers by working on filters and re-ranking based on systems that can predict the grammatical type of the answer. 
Another improvement can be achieved by leveraging on the output of the Char Based Solver which fills the grid with the most probable letters, leaving empty the cells which have more uncertainty. We would like to implement a system that exploits the letters that are actually fixed to find out the missing ones on the internet or with a Generative Pre-trained Transformer. Another branch of development resides in the intrinsic characteristic of WebCrow 2.0, in which the modularity of its frameworks allows us to add a new language solver with little effort. Of course, as happened for Italian, English, and French, language-specific experts have to be developed to obtain high performances in crossword solving. We are already in touch with German universities to explore this road. The last branch regards the inverse task, the crossword generation [20]. The experience gained, but even more the data collected during the WebCrow 2.0 experience, could represent a launch pad for the complex task of crossword generation. Consider that, for instance, the New York Times crosswords (one of the biggest collections of crosswords) contains an average of 96% of already seen answers, and only the 4% of the answers, on average, are new [21]. This task is still performed principally through semi-automatic proprietary software. New approaches should take into account Generative Pre-trained Transformers, which, at the moment, represent the most advanced approach for generating text and could be tested on generating crossword clues, which may also be ambiguous or tricky, covering different kinds of human knowledge." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research owes its accomplishment to the generous collaboration of esteemed French crossword authors, Serge Prasil and Michel Labeaume. The University of Siena, expert.ai, and the 3IA Côte d'Azur Investment in the Future projects administered by the National Research Agency (ANR), under the reference number ANR-19-P3IA-0002, provided invaluable support for this endeavor" } ]
Crossword puzzles are one of the most popular word games, played in different languages all across the world, and riddle styles can vary significantly from one country to another. Automated crossword resolution is challenging, and typical solvers rely on large databases of previously solved crosswords. In this work, we extend WebCrow 2.0, an automatic crossword solver, to French, making it the first program for crossword solving in the French language. To cope with the lack of a large repository of clue-answer crossword data, WebCrow 2.0 exploits multiple modules, called experts, that retrieve candidate answers from heterogeneous resources, such as the web, knowledge graphs, and linguistic rules. We compared WebCrow's performance against humans in two different challenges. Despite the limited amount of past crossword data, French WebCrow was competitive, actually outperforming humans in terms of speed and accuracy, thus proving its capability to generalize to new languages.
The WebCrow French Crossword Solver ⋆
[ { "figure_caption": "Fig. 1 .1Fig. 1. WebCrow Overview.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Statistics on French crosswords dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Dataset of previously seen clue-answers pairs and crosswords.", "figure_data": "Languageunique clue-answers pairs crosswordsAmerican Crosswords3,100K50,000Italian Crosswords125K2,000French Crosswords300K7,000", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Test CrossWords.", "figure_data": "SourceNumber of PuzzlesDimensionMichel Labeaume1010 x 10Serge Prasil1020 x 20Other Sources42Variable max 15 x 15", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance of the System on the Test CrossWords.", "figure_data": "SourceWords Accuracy Letters Accuracy Inserted LettersMichel Labeaume92.97%98%100%Serge Prasil91.82%96.9%99.15%Other Sources73.86%81.16%96.99%", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation test.", "figure_data": "Configuration % of correct words % of correct letters % word dropFull65.7175.22-No Rule based65.1674.790.55No Websearch61.6072.684.11No Lexicon61.3671.984.35No KG56.2868.389.43", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results of the Crossword Solving Competition (INRIA).", "figure_data": "PlayerScore Time (sec.)WebCrow Live 296.18419WebCrow Lab 313.75556AVG Human 50.392570Best Human 104.222700", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results of the French Crossword Solving Competition (WAICF).", "figure_data": "PlayerScore Time (sec.)WebCrow Live 228,90559WebCrow Lab 249,86368AVG Humans 24.242570Best Human 69,531493", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Giovanni Angelini; Marco Ernandes; Tommaso Iaquinta; Caroline Stehlé; Fanny Simões; Kamyar Zeinalipour; Andrea Zugarini; Marco Gori
[ { "authors": "Giovanni Angelini; Marco Ernandes; Marco Gori", "journal": "Springer", "ref_id": "b0", "title": "Solving italian crosswords using the web", "year": "2005" }, { "authors": "Giovanni Angelini; Marco Ernandes; Marco Gori", "journal": "Springer", "ref_id": "b1", "title": "Webcrow: A webbased crosswords solver", "year": "2005-12-02" }, { "authors": "Gianni Barlacchi; Massimo Nicosia; Alessandro Moschitti", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "SACRY: Syntax-based automatic crossword puzzle resolution system", "year": "2015-07" }, { "authors": "Vincent Berthelier", "journal": "", "ref_id": "b3", "title": "L'humour des mots croisés", "year": "2018" }, { "authors": "Daniel Cer", "journal": "", "ref_id": "b4", "title": "Universal sentence encoder", "year": "2018" }, { "authors": "Jacob Devlin", "journal": "", "ref_id": "b5", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Marco Ernandes; Giovanni Angelini; Marco Gori", "journal": "", "ref_id": "b6", "title": "Webcrow: A webbased system for crossword solving", "year": "" }, { "authors": "", "journal": "", "ref_id": "b7", "title": "ai Knowledge Graph", "year": "2023" }, { "authors": "L Matthew; Ginsberg", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b8", "title": "Dr. fill: Crosswords and an implemented solver for singly weighted csps", "year": "2011" }, { "authors": "Peter Hart; Nils Nilsson; Bertram Raphael", "journal": "IEEE Transactions on Systems Science and Cybernetics", "ref_id": "b9", "title": "A Formal Basis for the Heuristic Determination of Minimum Cost Paths", "year": "1968" }, { "authors": "Yitan Li; Linli Xu", "journal": "", "ref_id": "b10", "title": "Word Embedding Revisited: A New Representation Learning and Explicit Matrix Factorization Perspective", "year": "2015" }, { "authors": " Michael L Littman", "journal": "Springer", "ref_id": "b11", "title": "Computer language games", "year": "2000" }, { "authors": "Greg A Michael L Littman; Noam Keim; Shazeer", "journal": "Artificial Intelligence", "ref_id": "b12", "title": "A probabilistic approach to solving crossword puzzles", "year": "2002" }, { "authors": "Greg A Michael L Littman; Noam M Keim; Shazeer", "journal": "", "ref_id": "b13", "title": "Solving crosswords with Proverb", "year": "1999" }, { "authors": "", "journal": "", "ref_id": "b14", "title": "Bing Web Search API", "year": "2023" }, { "authors": "Tomas Mikolov", "journal": "", "ref_id": "b15", "title": "Advances in pre-training distributed word representations", "year": "2018" }, { "authors": "Tomas Mikolov", "journal": "", "ref_id": "b16", "title": "Distributed Representations of Words and Phrases and their Compositionality", "year": "2013" }, { "authors": "Boris Pallier; Christophe & New", "journal": "", "ref_id": "b17", "title": "Openlexicon, GitHub repository", "year": "2019" }, { "authors": " Redis", "journal": "", "ref_id": "b18", "title": "Redis Pub/Sub", "year": "2022-08-22" }, { "authors": "Leonardo Rigutini", "journal": "International Journal on Artificial Intelligence Tools", "ref_id": "b19", "title": "Automatic generation of crossword puzzles", "year": "2012" }, { "authors": "Eric Wallace", "journal": "", "ref_id": "b20", "title": "Automated Crossword Solving", "year": "2022" }, { "authors": "Andrea Zugarini; Marco Ernandes", "journal": "", "ref_id": "b21", "title": "A Multi-Strategy Approach to Crossword Clue Answer Retrieval and Ranking", "year": "2021" 
} ]
[ { "formula_coordinates": [ 6, 260.58, 444.33, 220.01, 12.69 ], "formula_id": "formula_0", "formula_text": "p s (c) = p s A (c) • p s D (c),(1)" }, { "formula_coordinates": [ 6, 323.92, 524.32, 156.67, 9.65 ], "formula_id": "formula_1", "formula_text": "A (c) == best D (c))(2)" }, { "formula_coordinates": [ 6, 262.67, 540.34, 217.92, 12.69 ], "formula_id": "formula_2", "formula_text": "A (c) == best D (c)) and (p s A (c), p s D (c) > 90%) (3)" } ]
2023-11-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b5", "b37", "b4", "b20", "b22", "b2", "b35", "b36", "b24", "b8", "b38", "b2", "b3", "b21" ], "table_ref": [], "text": "With the development of artificial intelligence, large language models, such as GPT-3 (Brown et al., 2020), GPT-4 (OpenAI, 2023), PaLM (Chowdhery et al., 2022), OPT (Zhang et al., 2022),\nand some other open-source LLMs (Touvron et al., 2023a;Chiang et al., 2023;Touvron et al., 2023b), have showed revolutionary potential in general language understanding and generation. As a critical technique of LLMs, instruction tuning (Ouyang et al., 2022;Shu et al., 2023;Wang et al., 2023cWang et al., , 2022a;;Cao et al., 2023;Li et al., 2023b;Wang et al., 2023a) enables LLMs to correctly follow various kinds of user instructions.\nIn early researches, instruction tuning (Wang et al., 2022a;Xu et al., 2023;Yu et al., 2023;Sun et al., 2023;Ding et al., 2023) mainly focuses on how to construct large-scale, diverse, and highquality instruction data. Recently, (Zhou et al., 2023) proposes a LIMA model which demonstrates that only 1,000 carefully crafted high-quality instructions can enable the model to possess a powerful instruction-following capability. Their results suggest that almost all of the knowledge in LLMs has been learned during pre-training, and only a small number of instruction tuning data is required to activate models to follow instructions and produce high quality responses. Subsequently, there has been a growing interest among researchers in the systematic filtration of high-quality and comprehensive subset from the extensive pool of instruction dataset (Cao et al., 2023;Chen et al., 2023;Li et al., 2023a). However, these data filtration methods rely too much on extra LLMs or mainly focus on the quality of instructions. Different from those methods, this paper proposes a model-oriented approach which selects instruction data based on a new criteria considering three aspects: quality, coverage and the necessity as well. The quality requires the selected instruction data to be good enough for both questions and answers. The coverage requires the selected instruction data to be diverse enough. The necessity indicates that the selected instruction data indeed fill the ability gap for the LLM of interested.\nIn order to select high-quality instruction data from a large dataset, this paper first proposes to use a quality evaluation model to assess all the (instruction, input, output) triplets, and then filter out the instruction data with high-quality scores. After that, we further propose to use a k-center greedy algorithm (Sener and Savarese, 2017) to select instruction data from the high-quality subset. This k-center greedy algorithm could select a subset of data points that are the farthest apart, thereby making the instruction data we collect are diverse and have broader coverage. In this way, we can get a seed instruction dataset for the target LLM finetuning. Due to the difference of pre-training data, model architecture and training processes, different LLMs vary in their abilities, which result in the fact that different LLMs require different kinds of instruction data. In order to further find out the instruction data the specific LLM needed, we fine-tune the given LLM with the seed instruction dataset, and then assess its inference results on all the high-quality instruction dataset. 
In this way, we can filter out the instructions on which the specific LLM performs poorly, making up an augmented dataset for the target LLM. This augmented dataset indicates the instruction-following capabilities that LLM lacks. Finally, by merging the seed instruction data and the augmented data, we get a highquality, broad-coverage and high-necessity subset from the original large-scale instruction datasets.\nWe then utilize these selected data to fine-tune the target LLM again.\nOur contributions can be summarized as follows:\n(1) We propose a new criteria for instruction data section including quality, coverage and necessity, and verify that they are valuable for the LLM finetuning.\n(2) We propose a model-oriented instruction selection approach which not only considers the quality and coverage of instruction data, but also integrates the necessity of instructions based on the ability of specific LLMs.\n(3) Experimental results show that the LLM finetuned with 4,000 instruction data selected by our approach could achieve a better performance than the LLM fine-tuned with the full original dataset (214k), indicating that our approach is effective in selecting valuable instruction data with highquality, broad-coverage and high-necessity." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b34", "b2", "b11", "b25", "b13", "b11", "b38", "b38", "b2", "b3" ], "table_ref": [], "text": "Recent researches show that instruction tuning could enable LLMs to be tailored to specific domains, tasks, or applications by providing explicit instructions or guidelines (Wei et al., 2021;Cao et al., 2023). In order to enhance the instructionfollowing abilities of LLMs, previous work mainly focus on increasing the data sizes through various strategies (Honovich et al., 2022;Wang et al., 2022a;Taori et al., 2023;Köpf et al., 2023;Honovich et al., 2022). However, the work of Zhou et al. (2023) illustrates that even a small number of constructed high-quality instructions could empower the model with a powerful instruction-following capability. They indicate that most of the knowledge in LLMs have been acquired during the pre-training procedure, and only a limited number of instruction data are enough to activate LLMs to follow instructions and generate high-quality responses. Their work demonstrates significant improvements compared to LLMs which are fine-tuned with similarscale unfiltered data. However, it should be noted that their approach requires manual involvement to select data from extensive datasets, which is both time-consuming and costly.\nMotivated by the work of (Zhou et al., 2023), Cao et al. (2023) proposed an instruction mining approach which adopts a linear quality rule and bag of indicators to evaluate the quality of instruction-following data. However, they do not conduct comparisons with LLMs trained on the complete dataset, and their approach is very complex.\nBesides, Chen et al. (2023) recently propose a ALPAGASUS model which directly leverages an external LLM (chatgpt) to score each instruction and then selects 9k Alpaca data with a threshold. Their model surpasses the performance of the official Alpaca model which is trained on the complete dataset. However, they rely excessively on external LLMs with great performance.\nDifferent from them, Li et al. (2023a) present a self-guided approach for LLMs to independently identify and choose relevant instruction pairs from extensive open-source datasets. 
In their approach, they introduce an Instruction-Following Difficulty (IFD) metric as a tool to identify gaps in a model's responses versus its autonomous generation capability. It signficantly reduces the need for manual curation and the associated costs for instruction tuning. However, when computing the IFD metric they only adopt one answer for each instruction, which neglects that the responses for each instruciton are diverse. Besides, they don't pay much attention to the quality and coverage of instruction data during selection procedure." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [ "b38", "b38", "b3", "b2", "b12", "b30", "b16", "b38" ], "table_ref": [], "text": "3.1 Which instructions are the valuable data for a given LLM Zhou et al. (2023) show that LLM's knowledge has been mostly learnt during pre-training. Instruction tuning is mainly used to teach a given LLM to learn how to follow a certain pattern when interacting with human, and only a small number of carefully crafted high-quality instructions are enough to equip the given LLM with powerful instruction-following capabilities. However, for different LLMs, as the knowledge and abilities they have learnt during the pre-training procedure are different, the instruction tuning data they require shoud be different as well. Consequently, how to select the most crucial data for a given LLM has garnered much attention of researchers. After analyzing some LLMs and instructions, we find that the valuable instruction tuning data for one given LLM are mainly decided by the following three aspects: Quality. \"Quality\" refers to the data quality of both the instructions and their corresponding responses in the dataset, which directly influences the knowledge LLM learns. As demonstrated in the work of (Zhou et al., 2023;Chen et al., 2023;Cao et al., 2023), high-quality instruction data can effectively enhance LLM's ability to follow instructions.\nCoverage. \"Coverage\" refers to the types of instrucitons the dataset includes. It represents the diversity of one instruction dataset. The more diverse instruction the dataset covers, the greater the potential of stimulating the capabilities of a large language model is. Researches of (Iyer et al., 2023;Wang et al., 2023bWang et al., , 2022b;;Longpre et al., 2023) also show that enhancing the diversity of instruction data can effectively enhance LLM's ability to follow instructions during fine-tuning.\nNecessity. \"Necessity\" indicates the importance and uniqueness of one instruction for fine-tuning a specific LLM. As described in the work of (Zhou et al., 2023), LLMs have already acquired a substantial amount of knowledge and capabilities during pre-training. Instruction tuning primarily fo-cuses on how to use a limited number of instruction data to stimulate LLM's capabilities, enabling LLMs to follow a certain pattern when interacting with human. Due to the knowledge and capabilities LLMs have learned are different, the importance and uniqueness of the same instruction data may vary for different LLMs. For a given instruction, if the LLM could generate a high-quality response, it indicates that the LLM has already owned the ability of following this type of instructions, and this instruction data is non-essential for the finetuning. Conversely, if the LLM cannot generate a good response for that instruction, it suggests that the LLM lacks the ability to follow this type of instructions, and that instruction is necessary for optimizing the LLM's capabilities." 
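Before detailing the individual modules, the sketch below shows one way the three criteria can be wired into a single selection pipeline. It is a sketch under explicit assumptions: `score` stands in for the reward model used for quality and necessity scoring, `embed` for the sentence encoder used in diverse selection, and `finetune`/`generate` for the training and inference steps; none of these names are taken from the released code, and the first k-center seed is chosen arbitrarily for brevity.

```python
import numpy as np

def k_center_greedy(vectors, budget):
    """Greedy K-Center selection (Sener and Savarese, 2017): repeatedly add the
    point farthest from the already chosen centers.  Starting from point 0 is a
    simplification of the original initialisation."""
    budget = min(budget, len(vectors))
    distances = np.linalg.norm(vectors - vectors[0], axis=1)
    selected = [0]
    while len(selected) < budget:
        u = int(np.argmax(distances))                      # farthest remaining point
        selected.append(u)
        distances = np.minimum(distances,
                               np.linalg.norm(vectors - vectors[u], axis=1))
    return selected

def mods_select(dataset, score, embed, finetune, generate,
                alpha=0.0, beta=0.0, seed_size=1000, aug_size=3000):
    """dataset: list of (instruction, input, output) triplets."""
    # 1) Quality: keep triplets whose reward-model score exceeds alpha.
    high_quality = [x for x in dataset if score(x[0], x[1], x[2]) > alpha]

    # 2) Coverage: pick a diverse seed via k-center greedy on instruction embeddings.
    vectors = np.asarray([embed(inst) for inst, _, _ in high_quality])
    seed = [high_quality[i] for i in k_center_greedy(vectors, seed_size)]

    # 3) Necessity: fine-tune on the seed, then keep the instructions whose
    #    responses from the initial model still score below beta.
    initial_model = finetune(seed)
    hard = [x for x in high_quality
            if score(x[0], x[1], generate(initial_model, x[0], x[1])) < beta]
    if hard:
        hard_vectors = np.asarray([embed(inst) for inst, _, _ in hard])
        augmented = [hard[i] for i in k_center_greedy(hard_vectors,
                                                      min(aug_size, len(hard)))]
    else:
        augmented = []
    return seed + augmented   # merged data used to fine-tune the raw pre-trained LLM
```

The following subsections describe the concrete choices behind these components, in particular the reward model used for scoring and the thresholds α and β.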
}, { "figure_ref": [], "heading": "Instruction Data Selection", "publication_ref": [], "table_ref": [], "text": "As mentioned in the previous section, the process of selecting effective instruction data from a largescale dataset for a given LLM is primarily determined by three aspects: quality, coverage, and necessity. To efficiently select the most valuable instruction data with these three aspects, this paper proposes a model-oritented approach for instruction data selection, which is shown in the top of Figure 1. This approach mainly includes three modules: Quality Evaluation, Diverse Data Selection for Seed Instructions and Augmented Data Selection. The details are presented in the following." }, { "figure_ref": [ "fig_0" ], "heading": "Quality Evaluation", "publication_ref": [ "b10", "b17", "b23", "b0" ], "table_ref": [], "text": "The quality of instruction data plays a crucial role in the learning of instruction-following capabilities for LLMs. Therefore, to select effective instruction data, we first evaluate the qualities of instruction data and their corresponding response in the largescale dataset, and then filter out the higher-quality data from it. When assessing the qualities of instruction data, we utilize the reward-model-deberta-v3-large-v22 model which is developed by OpenAssistant. This is a reward model designed based on the DeBERTa (He et al., 2021) architecture, and is trained on four different types of human feedback data (Nakano et al., 2022;Stiennon et al., 2022;Bai et al., 2022), endowing it with the abilities of QA model evaluation, reward scoring, and detecting potential toxic response via ranking. In this paper, we mainly adopt its reward scoring capability to generate a quality score for each (instruction, input, output) triplet in the large-scale dataset. As shown in Figure 2, some examples with quality scores are displayed.\nAfter generating the quality scores for each (instruction, input, output) triplet, we will filter them with a threshold α. Through collecting the (instruc-tion, input, output) triplet whose quality score is larger than α, we can get a high-quality instruction dataset." }, { "figure_ref": [], "heading": "Diverse Data Selection for Seed Instrucitons", "publication_ref": [ "b21", "b21", "b21", "b21", "b7" ], "table_ref": [], "text": "Algorithm 1 K-Center-Greedy (Sener and Savarese, 2017) Input: data x i , existing pool s 0 and a budget b\nInitialize s = s 0 repeat u = argmax i∈[n]\\s min j∈s ∆(x i , x j ) s = s ∪ u Until |s| = b + |s 0 | return s\\s 0\nAfter getting a high-quality instruction dataset, we will further select data from it. In order to select diverse instruction data with the maximum coverage, we propose to use K-Center greedy algorithm (Sener and Savarese, 2017) for data selection. K-Center greedy algorithm is proposed by (Sener and Savarese, 2017) in 2017, which is a simple yet effective approach used to address the K-Center problem. The objective of the K-Center problem is to choose a subset of K centers from a given set of data points in a manner that minimizes the maximum distance between any data point and its nearest center. This algorithm commences by selecting an initial center, typically the point farthest from any existing centers, and then proceeds to add new centers iteratitively. At each step, it chooses the point farthest from current set of centers. 
As shown in Algorithm 1 (Sener and Savarese, 2017), it presents the details of this algorithm.\nDuring diverse data selection process, we generate the sentence embeddings for all instructions with BERT (Devlin et al., 2018), which are used to compute the distances of different data points. Through this module, we can get a seed instruction dataset which has a great diversity and broad coverage." }, { "figure_ref": [], "heading": "Augmented Data Selection", "publication_ref": [], "table_ref": [], "text": "For different LLMs, as the knowledge and capabilities they learned in the pre-training procedure are different, the instruction tuning data they require will be different as well. For one instruction, if the given LLM could generate a good response, it indicates that the given LLM has owned the ability to handle this type of instruction, and this instruc-tion data is not necessary for the fine-tuning of the LLM. Conversely, if the LLM cannot generate a good response, it suggests that the LLM couldn't effectively process that type of instruction data, and the instruction data is very important and unique for the fine-tuning of the target LLM.\nIn section 3.2.2, we have generated a seed instruction dataset with high-quality and broadcoverage. However, as the valuable instructions vary for different LLMs, the seed instruction dataset may not include all the instructions the target LLM needs. In order to find out these missed instructions, we first fine-tune the pre-trained LLM with the seed instruction dataset, generating an initial LLM. Then we generate the responses of all the instructions in high-quality dataset with the initial LLM. After that, we will use a necessity evaluation model to compute a review score for each instruction and its generated response. In this paper, we still adopt the reward model used in section 3.2.1 as the necessity evaluation model. If the review scores are less than the threshold β, it represents that the initial LLM could not generate good responses for these instructions, and it doesn't own the capabilities to handle that types of instructions. After collecting all the instructions with low review scores, we will again use the K-center greedy selection algorithm described section 3.2.2 to select a subset from them, and then build an augmented dataset. This dataset could effectively compensate for the capability deficiencies of the initial LLM." }, { "figure_ref": [], "heading": "Fine-tuning with Selected Instruction Data", "publication_ref": [], "table_ref": [], "text": "Following the methods outlined in the previous section, we can get a seed instruction dataset and its augmented dataset for a given LLM. After that, we will merge these two datasets, and then finetune the raw pre-trained LLM. This process has been shown in the bottom part of Figure 1. In this way, we can get the final LLM which has a good instruction-following capability. The raw pretrained LLM used in this paper is LLaMA 2 (Touvron et al., 2023b)." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "4.1 Datasets" }, { "figure_ref": [], "heading": "Training set", "publication_ref": [ "b25", "b25", "b35", "b6", "b18", "b38" ], "table_ref": [], "text": "Alpaca. In this paper, we use Alpaca (Taori et al., 2023) which is built by Stanford University as one of the original instruction datasets. This dataset comprises 52,002 (instruction, input, output) triplets. It was created using the self-instruct approach (Wang et al., 2022a) with ChatGPT. 
And the LLM trained on this dataset shows a good instruction-following ability. However, relying too much on ChatGPT make researches concern about the quality of instruction data. Mixture Dataset. In addition to Alpaca, we also build a much larger mixture instruction dataset as the original training data. In this dataset, we mix the instruction data from HC3 (Guo et al.), alpaca (Taori et al., 2023), alpaca-evol-instruct (Xu et al., 2023), dolly-v2 (Conover et al., 2023), In-structWild (Ni et al., 2023) and lima (Zhou et al., 2023), and then construct a mixture instruction dataset which includes 214,526 (instruction, input, output) triplets. Compared to Alpaca, this dataset contains more diverse and rich instructions ranging from open-domain, medical, legal, financial and so on." }, { "figure_ref": [], "heading": "Test set", "publication_ref": [ "b28", "b35", "b4", "b38" ], "table_ref": [], "text": "In order to evaluate the performance of our proposed approach, we also utilize five different test sets as in the work of (Li et al., 2023a), including Koala (Vu et al., 2023), WizardLM (Xu et al., 2023), Self-instruct (Wang et al., 2022a), Vicuna (Chiang et al., 2023) and LIMA (Zhou et al., 2023). These test sets contain 180, 218, 252, 80 and 300 human-curated instruction data respectively, covering math, coding, writing, knowledge, computer and other domains." }, { "figure_ref": [], "heading": "Detatils of Training and Testing", "publication_ref": [ "b25", "b38", "b3", "b3" ], "table_ref": [], "text": "Training details. In this paper, we adopt LLaMA 2 (Touvron et al., 2023b) with 7B parameters as the raw LLM for fine-tuning. During fine-tuning procedure, we utilize the same hyperparameters as the work of (Taori et al., 2023), which include a learning rate of 2e-5, a warmup ratio of 0.03, a weight decay of 0.0 and a batch size of 128. Besides, the fine-tuning epoch is set to 3. And we conduct all fine-tuning and evaluation experiments on NVIDIA RTX A100. During the procedure of quality evaluation and necessity evaluation, both of the threshold α and β is set to 0.0 for Alpaca dataset, while they are set to 1.0 and -1.0 respectively for Mixture dataset.\nTesting details. During testing process, human evaluation is the most accurate and reliable approach to evaluate the instruciton-following capabilities of LLMs. However, this approach is very time-consuming and costly. Moreover, the evaluation results may also be effected by human biases. Consequently, in this paper, we also utilize Chat-GPT and GPT-4 for the evaluation of LLMs as in the work of (Zhou et al., 2023;Chen et al., 2023;Li et al., 2023a). During evaluation process, all the LLMs are prompted to generate the responses for all of the instructions in test sets. Subsequently, the evaluation LLM is prompted to assign a score for each of these responses based on the aspects of relevance and accuracy. And the score is on a scale from 1 to 10. Besides, in order to eliminate the impact of positional bias on the judements, following the work of (Chen et al., 2023;Li et al., 2023a), we also evaluate the responses of two given LLMs twice, but with different ordering in the prompts. 
Finally, we will compare their scores in these two times of evaluations respectively, and the criteria of winning is presented in the following:\nWins: If the model outperforms in both comparions or wins in one while tying in the other.\nTie: If the model ties in both comparions or wins in one while losing in the other.\nLoses: If the model loses in both comparisons or ties in one while losing in the other." }, { "figure_ref": [ "fig_1" ], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [ "tab_1", "tab_1" ], "text": "This section mainly present the performance of our approach on different test sets. As shown in Figure 3, it presents the comparison of our MoDS model with the model trained on the full alpaca dataset. During fine-tuning procedure, both the size of seed instruction dataset and augmented instruction dataset of our MoDS model are 500. From this figure we can see that our MoDS approach which only adopts 1000 instruction data achieves a better performance than the model trained on the full alpaca dataset, which utilizes 5,2000 instructions. This results indicate that our instruction data selection approach is effective, and a small number of high-quality, broad-coverage and high-necessity selected instruction data could also make LLMs have a powerful instruction-following ability.\nIn order to compare our method with the selfguided instruction data selection approach proposed by (Li et al., 2023a), Table 1 shows their comparisons with the corresponding models trained on full Alpaca dataset. In the work of (Li et al., 2023a), they introduce a Instruction-Following Difficulty (IFD) metric as a tool to identify gaps in a model's responses versus its autonomous generation capability and then select 5% percentage (about 2600 instructions) of the full alpaca data to fine-tune the raw LLM. In Table 1, Self-guided represents the model fine-tuned with the 2600 instruction data 3 selected by self-guided approach. And MoDS(1000) represents the model fine-tuned with 500 seed instruction data and 500 augmented instruction data which are choosen by our approach, while MoDS(2000) represents the model fine-tuned with 1000 seed instruction data and 1000 augmented instruction data. For all of them, the pretrained language model is LLaMA 2. From this table, we can see that MoDS(1000) is comparable to Self-guided on Vicuna, Koala, WizardLM and LIMA test sets, while it is better than Self-guided on Sinstruct test set. And MoDS(2000) is better than Self-guided on all of the test sets. It should be noted that the numbers of instruction data utilized by MoDS(1000) and MoDS(2000) are smaller than the Self-guided model. The results demonstrate that our model-oriented approach can better select instruction data the target LLM needs, and then effectively enhance LLM's instruction-following capabilities.\nIn addition to the Alpaca dataset, Figure 4 presents the comparison results of our MoDS model trained on the selected data with the model trained on full Mixture Instruction Dataset. While fine-tuning on this dataset, the size of seed instructions and augmented instructions of our MoDS model are 1000 and 3000 respectively. From this figure, we can see that MoDS performs significantly better than the model trained on full mixture dataset. However, our MoDS model only adopt 4,000 instructions to fine-tuning the pre-trained language model while the model trained on full Mixture Dataset utilize 214K instructions. 
This result once again demonstrates that our proposed approach could effectively select valuable instruction data from large-scale datasets for a target LLM. " }, { "figure_ref": [ "fig_4", "fig_4", "fig_5" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "To select instruction data with maximum coverage, this paper proposes to use K-center greedy algorithm to select data from high-quality datasets. In order to analyze the effect of K-center greedy algorithm on data selection, Figure 5 shows the comparison results of K-center greedy and random sampling approaches on diffferent testsets. In this figure, we first select data from the high-quality dataset with K-center greedy algorithm and random sampling approach respectively, and then fine-tune the pre-trained language model with the selected subsets. The number of selected instruction is 1000, and the original instruction dataset is the Mixture Dataset which includes 214k instrucitons. From this figure, we can see that the model fine-tuned with K-center greedy algorithm performs much better than the model which is fine-tuned with random sampling approach. It indicates that K-center greedy algorithm could select more valuable and diverse instruction data from high-quality dataset.\nIn Figure 6, we compare MoDS with the model which is just fine-tuned with the seed instruction data extracted from Mixture Dataset. Through this way, we can check whether the augmented instruction data could further the ability of LLMs. Instead of selecting 1,000 seed instruciton data and 3,000 augmented instruction data respectively, in Figure 6 we directly select 4,000 seed instruction data from the high-quality subset of Mixture Dataset. After that, we utilize these 4,000 instruction data to fine-tune the pre-trained language model and compare its performance with MoDS. From this figure, we can see that MoDS is much better than the model fine-tuned with 4,000 seed instruction data. This result demonstrates that the augmented instruction data could effectively compensate for LLM's capacity gaps, thus further enhancing its instruction-following capability.\nTo investigate the impact of instruction number on LLMs in our approach, Figure 7 presents the winning scores of our models with different numbers of augmented instruction data on Mixture Dataset. Following the work of (Li et al., 2023a), the winning score is also computed by (Num(win)-Num(lose)/Num(all)) + 1. The number of \"win\", \"lose\" and \"all\" are also computed across all five test sets. And the values of winning score which higher than 1.0 represents our model performs better than the model fine-tuned with full Mixture Dataset, while the values below 1.0 indicate that our model's performance is worse than the full Mixture Dataset model. From this figure, we can see that the performance of our models could effectively improve when we increase the number of augmented instruction data. This result also illustrates that the augmented data are very valuable to enhance the instruction-following capabilities of LLMs. Furthermore, when the size of augmented dataset reaches 3000, the performance of the model no longer significantly improves. This result suggests that using 3,000 augmented instruction data for Mixture Dataset is already enough in compensating for the model's capability shortcomings." 
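For reference, the snippet below shows how the order-swapped judge ratings described above map onto the win/tie/lose outcomes and the winning score reported in Figure 7. The judge itself (ChatGPT or GPT-4) is not reproduced here, so its 1–10 ratings are assumed to be given.

```python
def pairwise_outcome(first_order, second_order):
    """Each argument is a (our_score, baseline_score) pair produced by the judge
    under one of the two prompt orderings used to cancel positional bias."""
    def verdict(ours, baseline):
        return 1 if ours > baseline else (-1 if ours < baseline else 0)
    total = verdict(*first_order) + verdict(*second_order)
    if total > 0:
        return "win"    # wins both comparisons, or wins one and ties the other
    if total < 0:
        return "lose"   # loses both comparisons, or ties one and loses the other
    return "tie"        # ties both, or wins one and loses the other

def winning_score(outcomes):
    """(Num(win) - Num(lose)) / Num(all) + 1, computed across all five test sets."""
    wins = sum(o == "win" for o in outcomes)
    loses = sum(o == "lose" for o in outcomes)
    return (wins - loses) / len(outcomes) + 1
```

Applying `winning_score` to the pooled outcomes of the five test sets corresponds to the quantity plotted in Figure 7.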
}, { "figure_ref": [], "heading": "Conclustion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a model-oriented instruction data selection approach to select valuable instructions for a target foundation LLM. During the selection of instruction data, our approach not only considers the quality and coverage of instruction data, but also integrates the necessity of instructions based on the ability of target LLM. First of all, in our approach, we use a quality evaluation model to evaluate all the (instruction, input, output) triplets in the datasets, and then filter out the instructions with high quality. Secondly, we use a K-center greedy algorithm to select a seed instruction dataset from the high-quality dataset, which makes the selected data as diverse as possible and have a broad coverage. Thirdly, we use the seed instruction dataset to fine-tune the foundation LLM, and then evaluate the fine-tuned LLM on all high-quality instructions to find out the augmented instruction data for the target LLM, which could effectively compensate for the model's capability gaps. Finally, by merging the seed instruction data and the augmented data, we can get a high-quality, broad-coverage and high-necessity dataset from the original large-scale datasets. The final selection dataset is used to fine-tune the foundation LLM to generate the optimized LLM which have the powerful instruction-following capability." } ]
Instruction tuning has become the de facto method to equip large language models (LLMs) with the ability to follow user instructions. Usually, hundreds of thousands or millions of instruction-following pairs are employed to fine-tune the foundation LLMs. Recently, some studies have shown that a small amount of high-quality instruction data is enough. However, how to select appropriate instruction data for a given LLM is still an open problem. To address this problem, in this paper we present a model-oriented data selection (MoDS) approach, which selects instruction data based on new criteria considering three aspects: quality, coverage and necessity. First, our approach utilizes a quality evaluation model to select the high-quality subset from the original instruction dataset, and then designs an algorithm to further select from the high-quality subset a seed instruction dataset with good coverage. The seed dataset is applied to fine-tune the foundation LLM to obtain an initial instruction-following LLM. Finally, we develop a necessity evaluation model to find the instruction data on which the initial instruction-following LLM performs badly and consider them the necessary instructions to further improve the LLM. In this way, we can get a small high-quality, broad-coverage and high-necessity subset from the original instruction datasets. Experimental results show that the model fine-tuned with 4,000 instruction pairs selected by our approach can perform better than the model fine-tuned with the full original dataset, which includes 214k instruction data. Codes, data, and models are available.
MoDS: Model-oriented Data Selection for Instruction Tuning
[ { "figure_caption": "Figure 2 :2Figure 2: Examples of instruciton data with quality scores.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The comparison of our MoDS model trained on selected data with the model trained on full Alpaca dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "3 https://github.com/MingLiiii/Cherry_LLM", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :Figure 5 :45Figure 4: The comparison of our MoDS model trained on selected data with the model trained on full mixture dataset.", "figure_data": "", "figure_id": "fig_3", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The comparison of MoDS model and the model which is just fine-tuned with seed instruction data of Mixture Dataset.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The winning scores of our models with different number of augmented data on Mixture Dataset. All the comparisons of these models are judged by Chat-GPT.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "The comparisons between different instruction selection models and full Alpaca model on different test sets, using GPT-4 as the judge.", "figure_data": "Test setsVicunaKoalaWizardLMSinstructLIMAwin tie lose win tie lose win tie lose win tie lose win tie loseSelf-guided60713112 4424127 5734122 8248197 4855MoDS(1000) 571013994833123 5738136 7046174 7056MoDS(2000) 67310116 4024132 5729134 7741195 5847", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
Qianlong Du; Chengqing Zong; Jiajun Zhang
[ { "authors": "Yuntao Bai; Andy Jones; Kamal Ndousse; Amanda Askell; Anna Chen; Nova Dassarma; Dawn Drain; Stanislav Fort; Deep Ganguli; Tom Henighan; Nicholas Joseph; Saurav Kadavath; Jackson Kernion; Tom Conerly; Sheer El-Showk; Nelson Elhage; Zac Hatfield-Dodds; Danny Hernandez; Tristan Hume; Shauna Scott; Liane Kravec; Catherine Neel Nanda; Dario Olsson; Tom Amodei; Jack Brown; Sam Clark; Chris Mccandlish; Ben Olah; Jared Mann; Kaplan", "journal": "", "ref_id": "b0", "title": "Training a helpful and harmless assistant with reinforcement learning from human feedback", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Yihan Cao; Yanbin Kang; Lichao Sun", "journal": "", "ref_id": "b2", "title": "Instruction mining: High-quality instruction data selection for large language models", "year": "2023" }, { "authors": "Lichang Chen; Shiyang Li; Jun Yan; Hai Wang; Kalpa Gunaratna; Vikas Yadav; Zheng Tang; Vijay Srinivasan; Tianyi Zhou; Heng Huang; Hongxia Jin", "journal": "", "ref_id": "b3", "title": "Alpagasus: Training a better alpaca with fewer data", "year": "2023" }, { "authors": "Wei-Lin Chiang; Zhuohan Li; Zi Lin; Ying Sheng; Zhanghao Wu; Hao Zhang; Lianmin Zheng; Siyuan Zhuang; Yonghao Zhuang; Joseph E Gonzalez; Ion Stoica; Eric P Xing", "journal": "", "ref_id": "b4", "title": "Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2023" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b5", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Mike Conover; Matt Hayes; Ankit Mathur; Jianwei Xie; Jun Wan; Sam Shah; Ali Ghodsi; Patrick Wendell; Matei Zaharia; Reynold Xin", "journal": "", "ref_id": "b6", "title": "Free dolly: Introducing the world's first truly open instructiontuned llm", "year": "2023" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b7", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Ning Ding; Yulin Chen; Bokai Xu; Yujia Qin; Zhi Zheng; Shengding Hu; Zhiyuan Liu; Maosong Sun; Bowen Zhou", "journal": "", "ref_id": "b8", "title": "Enhancing chat language models by scaling high-quality instructional conversations", "year": "2023" }, { "authors": "Biyang Guo; Xin Zhang; Ziyuan Wang; Minqi Jiang; Jinran Nie; Yuxuan Ding; Jianwei Yue; Yupeng Wu", "journal": "", "ref_id": "b9", "title": "How close is chatgpt to human experts? 
comparison corpus, evaluation, and detection", "year": "" }, { "authors": "Pengcheng He; Xiaodong Liu; Jianfeng Gao; Weizhu Chen", "journal": "", "ref_id": "b10", "title": "Deberta: Decoding-enhanced bert with disentangled attention", "year": "2021" }, { "authors": "Or Honovich; Thomas Scialom; Omer Levy; Timo Schick", "journal": "", "ref_id": "b11", "title": "Unnatural instructions: Tuning language models with (almost) no human labor", "year": "2022" }, { "authors": "Srinivasan Iyer; Xi Victoria Lin; Ramakanth Pasunuru; Todor Mihaylov; Daniel Simig; Ping Yu; Kurt Shuster; Tianlu Wang; Qing Liu; Punit Singh Koura; Xian Li; Brian O' Horo; Gabriel Pereyra; Jeff Wang; Christopher Dewan; Asli Celikyilmaz; Luke Zettlemoyer; Ves Stoyanov", "journal": "", "ref_id": "b12", "title": "Opt-iml: Scaling language model instruction meta learning through the lens of generalization", "year": "2023" }, { "authors": "Andreas Köpf; Yannic Kilcher; Sotiris Dimitri Von Rütte; Zhi-Rui Anagnostidis; Keith Tam; Abdullah Stevens; Barhoum; Minh Nguyen; Oliver Duc; Richárd Stanley; Nagyfi", "journal": "", "ref_id": "b13", "title": "Openassistant conversations-democratizing large language model alignment", "year": "2023" }, { "authors": "Ming Li; Yong Zhang; Zhitao Li; Jiuhai Chen; Lichang Chen; Ning Cheng; Jianzong Wang; Tianyi Zhou; Jing Xiao", "journal": "", "ref_id": "b14", "title": "From quantity to quality: Boosting llm performance with self-guided data selection for instruction tuning", "year": "2023" }, { "authors": "Xian Li; Ping Yu; Chunting Zhou; Timo Schick; Luke Zettlemoyer; Omer Levy; Jason Weston; Mike Lewis", "journal": "", "ref_id": "b15", "title": "Self-alignment with instruction backtranslation", "year": "2023" }, { "authors": "Shayne Longpre; Le Hou; Tu Vu; Albert Webson; Hyung Won Chung; Yi Tay; Denny Zhou; Quoc V Le; Barret Zoph; Jason Wei; Adam Roberts", "journal": "", "ref_id": "b16", "title": "The flan collection: Designing data and methods for effective instruction tuning", "year": "2023" }, { "authors": "Reiichiro Nakano; Jacob Hilton; Suchir Balaji; Jeff Wu; Long Ouyang; Christina Kim; Christopher Hesse; Shantanu Jain; Vineet Kosaraju; William Saunders; Xu Jiang; Karl Cobbe; Tyna Eloundou; Gretchen Krueger; Kevin Button; Matthew Knight; Benjamin Chess; John Schulman", "journal": "", "ref_id": "b17", "title": "Webgpt: Browserassisted question-answering with human feedback", "year": "2022" }, { "authors": "Jinjie Ni; Fuzhao Xue; Kabir Jain; Mahir Hitesh Shah; Zangwei Zheng; Yang You", "journal": "", "ref_id": "b18", "title": "Instruction in the wild: A user-based instruction dataset", "year": "2023" }, { "authors": " Openai", "journal": "", "ref_id": "b19", "title": "GPT-4 technical report", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b20", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Ozan Sener; Silvio Savarese", "journal": "", "ref_id": "b21", "title": "Active learning for convolutional neural networks: A core-set approach", "year": "2017" }, { "authors": "Manli Shu; Jiongxiao Wang; Chen Zhu; Jonas Geiping; Chaowei Xiao; Tom Goldstein", "journal": "", "ref_id": "b22", "title": "On the exploitability of instruction tuning", "year": "2023" }, { "authors": "Nisan Stiennon; Long Ouyang; Jeff Wu; Daniel M Ziegler; Ryan Lowe; 
Chelsea Voss; Alec Radford; Dario Amodei; Paul Christiano", "journal": "", "ref_id": "b23", "title": "Learning to summarize from human feedback", "year": "2022" }, { "authors": "Zhiqing Sun; Yikang Shen; Qinhong Zhou; Hongxin Zhang; Zhenfang Chen; David Cox; Yiming Yang; Chuang Gan", "journal": "", "ref_id": "b24", "title": "Principle-driven selfalignment of language models from scratch with minimal human supervision", "year": "2023" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b25", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurélien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b26", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale", "journal": "", "ref_id": "b27", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "Thuy-Trang Vu; Xuanli He; Gholamreza Haffari; Ehsan Shareghi", "journal": "", "ref_id": "b28", "title": "Koala: An index for quantifying overlaps with pre-training corpora", "year": "2023" }, { "authors": "Guan Wang; Sijie Cheng; Xianyuan Zhan; Xiangang Li; Sen Song; Yang Liu", "journal": "", "ref_id": "b29", "title": "Openchat: Advancing open-source language models with mixed-quality data", "year": "2023" }, { "authors": "Yizhong Wang; Hamish Ivison; Pradeep Dasigi; Jack Hessel; Tushar Khot; Raghavi Khyathi; David Chandu; Kelsey Wadden; Noah A Macmillan; Iz Smith; Hannaneh Beltagy; Hajishirzi", "journal": "", "ref_id": "b30", "title": "How far can camels go? 
exploring the state of instruction tuning on open resources", "year": "2023" }, { "authors": "Yizhong Wang; Yeganeh Kordi; Swaroop Mishra; Alisa Liu; Noah A Smith; Daniel Khashabi; Hannaneh Hajishirzi", "journal": "", "ref_id": "b31", "title": "Self-instruct: Aligning language model with self generated instructions", "year": "2022" }, { "authors": "Yizhong Wang; Swaroop Mishra; Pegah Alipoormolabashi; Yeganeh Kordi; Amirreza Mirzaei; Anjana Arunkumar; Arjun Ashok; Arut Selvan Dhanasekaran; Atharva Naik; David Stap; Eshaan Pathak; Giannis Karamanolakis; Gary Haizhi; Ishan Lai; Ishani Purohit; Jacob Mondal; Kirby Anderson; Krima Kuznia; Maitreya Doshi; Kuntal Patel; Mehrad Kumar Pal; Mihir Moradshahi; Mirali Parmar; Neeraj Purohit; Varshney; Rohitha Phani; Pulkit Kaza; Ravsehaj Verma; Rushang Singh Puri; Karia; Keyur Shailaja; Savan Sampat; Siddhartha Doshi; Sujan Mishra; Sumanta Reddy; Tanay Patro; Xudong Dixit; Chitta Shen; Yejin Baral; Noah A Choi; Hannaneh Smith; Daniel Hajishirzi; Khashabi", "journal": "", "ref_id": "b32", "title": "Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks", "year": "2022" }, { "authors": "Yufei Wang; Wanjun Zhong; Liangyou Li; Fei Mi; Xingshan Zeng; Wenyong Huang; Lifeng Shang; Xin Jiang; Qun Liu", "journal": "", "ref_id": "b33", "title": "Aligning large language models with human: A survey", "year": "2023" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b34", "title": "Finetuned language models are zero-shot learners", "year": "2021" }, { "authors": "Can Xu; Qingfeng Sun; Kai Zheng; Xiubo Geng; Pu Zhao; Jiazhan Feng; Chongyang Tao; Daxin Jiang", "journal": "", "ref_id": "b35", "title": "Wizardlm: Empowering large language models to follow complex instructions", "year": "2023" }, { "authors": "Yue Yu; Yuchen Zhuang; Jieyu Zhang; Yu Meng; Alexander Ratner; Ranjay Krishna; Jiaming Shen; Chao Zhang", "journal": "", "ref_id": "b36", "title": "Large language model as attributed training data generator: A tale of diversity and bias", "year": "2023" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin", "journal": "", "ref_id": "b37", "title": "Opt: Open pre-trained transformer language models", "year": "2022" }, { "authors": "Chunting Zhou; Pengfei Liu; Puxin Xu; Srini Iyer; Jiao Sun; Yuning Mao; Xuezhe Ma; Avia Efrat; Ping Yu; Lili Yu", "journal": "", "ref_id": "b38", "title": "Lima: Less is more for alignment", "year": "2023" } ]
[ { "formula_coordinates": [ 5, 70.87, 203.56, 162.46, 79.43 ], "formula_id": "formula_0", "formula_text": "Initialize s = s 0 repeat u = argmax i∈[n]\\s min j∈s ∆(x i , x j ) s = s ∪ u Until |s| = b + |s 0 | return s\\s 0" } ]
2023-11-27
[ { "figure_ref": [ "fig_2" ], "heading": "Introduction", "publication_ref": [ "b2", "b12", "b40", "b33", "b31", "b32", "b35", "b44", "b45", "b18", "b10", "b17", "b33" ], "table_ref": [], "text": "Generative models have witnessed notable advancements in recent years, transitioning from earlier Generative Adversarial Networks (GAN) [3,13,20] to the more recent diffusion models [9,17,41]. Text-to-image models like Stable Diffusion [34], DALLE [32,33], and Imagen [36], trained on extensive datasets, have demonstrated impressive capabilities in producing high-quality images from textual prompts. However, these diffusion models primarily optimize the log-likelihood objective, which, although effective for generative tasks, may not consistently fulfill specific re- quirements for downstream applications. Key challenges include achieving desirable image aesthetics and aligning generated images with text descriptions, both of which are critical for applications in areas such as content generation and multimedia synthesis.\nAlthough prompt engineering is helpful in some cases, such as Fig. 2, these techniques [15, 45,46] have inherent limitations, including a lack of precise control, limited generalization across various models, and inadequacy in addressing complex demands. For instance, current models encounter difficulties in generating visual texts [25], and comprehending object counts [19,23]. Therefore, recent efforts draw inspiration from the success of reinforcement learning from human feedback (RLHF) employed in large language models [28] and adopt similar strategies to enhance the alignment capabilities of diffusion models [2, 11,23,30]. While these methods show promise, they all finetune the U-Net conditioned on the fixed suboptimal text encoder, which constrains their efficacy.\nIn this paper, we introduce TexForce, an innovative method that applies reinforcement learning combined with low-rank adaptation to enhance the text encoder using taskspecific rewards. We utilize the DDPO (denoising diffusion policy optimization) [2] algorithm to update the text encoder, which is based on PPO (proximal policy optimization [39]) in the iterative denoising process. Unlike direct backpropagation, this RL algorithm does not require differ-entiable rewards, offering greater flexibility. By finetuning with LoRA [18], TexForce can adapt to diverse tasks by simply switching the LoRA weights, and also allows for the fusion of different LoRA weights to combine the capabilities learned from different rewards. Most importantly, Tex-Force can be seamlessly integrated with existing finetuned U-Net models from previous methods and achieves much better performance without additional training.\nAs illustrated in Fig. 1, our approach significantly enhances the result quality of Stable Diffusion 1.4 (SDv1.4) [34] when finetuned to align with different rewards. We validate our approach through extensive experiments on both single-prompt and multi-prompt tasks across various reward functions. In Sec. 4.2, we provide empirical analysis of the difference between finetuning the text encoder and U-Net. In Sec. 4.3, we conduct comprehensive comparisons with other methods and show that our method can be directly combined with existing methods to achieve state-of-the-art performance. Results with different backbones are also presented in Sec. 4.4. In Sec. 4.6, we demonstrate the adaptability of our method across various applications, including the generation of high-quality face and hand images." 
}, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Text-to-Image Diffusion Models", "publication_ref": [ "b39", "b41", "b42", "b33", "b3", "b48", "b28", "b33", "b31", "b32", "b35", "b34", "b11", "b51", "b10" ], "table_ref": [], "text": "Denoising diffusion models [17,40,42,43] have become the de facto standard for generative tasks, owing to their remarkable capabilities in generating diverse multimedia content, including images [9,34], videos [14,21,49], 3D content [26,29], and more. Text-to-image models, particularly those creating images based on textual prompts, have gained significant traction attributable to the availability of powerful models such as StableDiffusion [34], DALLE [32,33] and Imagen [36]. Several approaches have emerged to enhance control over texture details in the generated outputs. Noteworthy methods like DreamBooth [35] and Texture Inversion [12] offer tailored solutions for specific image requirements. To improve the generalization capabilities, ControlNet [52] and T2I-Adapter [27] introduce additional image encoders to control the structure and details. Nonetheless, they still require a large number of paired images to train, and may struggle to meet the diverse demands of various tasks. Prompt engineering [15] is another popular approach aimed at enhancing the quality of generated images. However, this method is constrained by the expressiveness of text prompts and pretrained models and may not be straightforward when addressing complex tasks such as aesthetic quality and object composition [11,23]." }, { "figure_ref": [], "heading": "Learning from Feedback in Diffusion Models", "publication_ref": [ "b9", "b7", "b10", "b50", "b6" ], "table_ref": [], "text": "Recent efforts have aimed to optimize diffusion models directly using human rewards or task objectives, typically cat-egorized into three main approaches: reward-weighted regression (RWR), reinforcement learning (RL), and direct backpropagation. RWR methods like RAFT [10], Lee et al.\n[23], and Emu [8] start by assessing image quality with human feedback and then re-weight or select high-quality examples to enhance overall performance. RL methods, exemplified by DDPO [2] and DPOK [11], treat the denoising process as a Markov decision process and optimize the model using RL algorithms, such as PPO [39]. Direct backpropagation methods, including AlignProp [30], ReFL [51], and DRaFT [7], propagate gradients directly from the reward function to the model. Because these models only finetune U-Net conditioned on the suboptimal text encoder, their effectiveness in aligning outputs with text prompts is often limited. Our proposed TexForce complements these approaches and demonstrates significant improvements." }, { "figure_ref": [], "heading": "Quality Metrics for Generative Models", "publication_ref": [ "b23", "b50", "b50", "b5", "b33" ], "table_ref": [], "text": "With the increasing popularity of generative models, several benchmarks [22,24,47,50,51] have been developed to evaluate the quality of generated images. Notably, Im-ageReward [51], PickScore [22], and HPS [50] are among the more frequently employed benchmarks. Additionally, image aesthetic metrics, particularly the LAION Aesthetics Predictor [6], find widespread application in data filtering [34] and results assessment." 
}, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminaries on Diffusion Models", "publication_ref": [ "b33", "b33", "b30" ], "table_ref": [], "text": "Diffusion models [17,34] belong to the class of generative models that leverage noise-driven processes to progressively transform data distributions. This process contains a controlled noise addition phase (forward diffusion) and a noise removal phase (reverse diffusion). Given image samples x 0 originating from the data distribution q(x 0 ), the forward diffusion process generates a sequence of images {x t } T t=1 by iteratively introducing noise via a Markov chain with a predefined noise schedule. Then, the reverse diffusion process is to learn a denoising U-Net ϵ θ to estimate the cleaner x t-1 with the noisy x t :\np θ (x t-1 |x t ) = N (x t-1 ; µ θ (x t , t), Σ θ (x t , t)), (1\n)\nwhere θ is the learnable parameter. For text-to-image diffusion models [34], this process is conditioned on a text input s, encoded with a text encoder z = τ ϕ (s). Then the network ϵ θ is trained with the following objective:\nL(θ) = E xt,s,t,ϵ∼N (0,I) ∥ϵ -ϵ θ (x t , t, z)∥ 2 2 ,(2)\nwhich aims to optimize the variational lower bound on the log-likelihood of the data distribution q(x 0 ). It is worth noting that the text encoder τ ϕ is usually a pretrained model, such as CLIP [31], and is fixed during training. " }, { "figure_ref": [], "heading": "Reinforcement Learning with LoRA", "publication_ref": [ "b17", "b17" ], "table_ref": [], "text": "According to the above formulation, diffusion models are learnt to optimize the log-likelihood objective Eq. (2), which is not directly related to the task requirements. To address this issue, we propose to finetune the text encoder τ ϕ with reinforcement learning (RL). Details come as follows.\nRL in Diffusion Models. In our setting, the RL framework optimizes the policy defined by the diffusion model conditioned on the text embeddings from the text encoder. The text encoder τ ϕ acts as the policy network that maps text descriptions to actions (text embeddings), which then influences the generative process of the diffusion model. Let R be the reward function that evaluates the quality of the generated images, which could encapsulate various aspects, such as image-text alignment and image quality, and adherence to specific attributes desired in the output. Then the objective of RL is to maximize the expected reward:\nJ(ϕ) = E [R(x 0 , s)] .(3)\nSince the denoising process can be formulated as a Markov decision process [2], i.e., p θ (x 0 |z) = p(x T ) T t=1 p θ (x t-1 |x t , z), the policy gradient of Eq. (3) can be computed as:\n∇ ϕ J = E T t=0 ∇ ϕ log p θ (x t-1 |x t , τ ϕ (s))R(x 0 , s) . (4)\nFollowing the DDPO algorithm, we use the Proximal Policy Optimization (PPO) [39] to keep stable learning dynamics. It applies importance sampling with clipped probability ratio to Eq. (3) which becomes:\nJ = E [min(r t (ϕ)A, clip(r t (ϕ), 1 -λ, 1 + λ)A)] , (5)\nwhere the advantage value A is the normalized rewards R over a buffer set of x 0 , and r t is the probability ratio between the new policy and the old policy for the denoise step p θ (x t-1 |x t , τ ϕ (s)). Since the policy is an isotropic Gaussian, the probability can be easily calculated. Then, we can calculate the gradient for Eq. (5) similar to Eq. (4) to update the policy network τ ϕ . 
More details are provided in supplementary materials.\nLow-Rank Adaptation (LoRA) [18] is a technique that allows for the modification of large pre-trained models without the need for extensive re-training. It achieves this by inserting trainable low-rank matrices into the original feedforward layer as W ′ = W +α∆W , where ∆W is the learnable weights initialized to zero and α is a scale factor. Such low-rank weight matrices are shown to be helpful in preventing the model from overfitting to the training data [18]." }, { "figure_ref": [], "heading": "Discussion of Finetuning for Diffusion Model", "publication_ref": [ "b36", "b6", "b50", "b6" ], "table_ref": [], "text": "In this part, we briefly discuss the advantages of finetuning the text encoder with reinforcement learning to improve the performance of diffusion models.\nFinetune of Diffusion Model. As discussed in Sec. 3.1, given the text s and x 0 , the denoising network ϵ θ is learned by maximizing the following lower bound:\nE z∼q ϕ (z|s) [log(p θ (x 0:T |z))] -D KL (q ϕ (z|s)||p(z)). (6)\nIn the training stage of diffusion models, ϕ is usually fixed and p θ (x 0 |z) are learned through classifier free guidance [16]. With an extremely large amount of s in datasets such as LAION-400M [37,38], it is reasonable to assume that q ϕ is a good estimation of p(z) even when ϕ is fixed. However, in the finetuning stage, we expect to use a small amount of s to optimize Eq. ( 6) for specific tasks. In such cases, the q ϕ is likely to be a suboptimal estimation of p(z), and thus largely increasing the second KL term. Therefore, we believe that it is necessary to finetune the text encoder τ ϕ to minimize the second term when the finetune dataset is limited. RL v.s. Direct Backpropagation. Besides reinforcement learning, recent approaches also directly backpropagate the gradients through the denoising steps [7,30,51]:\n∇ θ L = n m ∂R ∂xt ∂xt ∂θ , where m ≤ n ∈ [0, T ].\nHowever, this approach is more likely to overfit the reward function and lead to mode collapse. For instance, in DRaFT [7], the model may collapse to generate a single image to achieve high aesthetic rewards. Besides, RL does not require differentiable quality rewards, and is much more flexible than direct backpropagation. For example, current applications can collect human feedbacks and use them as rewards to directly finetune the model. " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3", "fig_3", "fig_3" ], "heading": "Finetuning Text Encoder v.s. U-Net", "publication_ref": [], "table_ref": [], "text": "In this section, we will empirically analyze the difference between finetuning the text encoder and U-Net, through the incompression task as introduced in DDPO [2]. It aims to enhance the complexity of generated images through reinforcement learning (RL). The reward for this task is assessed based on the image size after JPEG compression with a quality factor of 95. Given its objective nature and the ambiguity of possible solutions, this task is suitable for analyzing behaviors of different models when optimized for the reward. We finetune both the text encoder and the U-Net with LoRA using the simple animal prompts.\nFigure 4 shows the results comparison between finetun- ing text encoder and U-Net with LoRA. 
By comparing models with the same incompression score, we can have the following observations:\n• U-Net tends to change the visual appearance to increase the reward, whereas the text encoder introduces novel visual concepts to attain the same objective. As shown in Fig. 4, despite having comparable incompression scores, the outcomes from the text encoder are more coherent than those from the U-Net. However, this also makes the optimization of the text encoder more challenging and time-consuming. • We can directly combine the LoRA weights from Tex-Force and U-Net to achieve even better results. As shown in Fig. 4, the results in the third column achieve the highest incompression score and still maintain a similar visual structure from the first column, successfully combining advantages from the LoRA weights of both the text en-coder and U-Net. It is worth noting that this is achieved without additional training." }, { "figure_ref": [ "fig_5", "fig_7" ], "heading": "Comparison with Existing Works", "publication_ref": [ "b10", "b10" ], "table_ref": [], "text": "Results on Different Individual Prompts. As demonstrated in prior work [11,23], existing stable diffusion models exhibit misalignment with simple text prompts, such as color consistency (e.g., A green colored dog.) and combination of objects (e.g., A cat and a dog). Therefore, we start with these simple individual scenarios to highlight the advantages of our method. We follow the experimental settings of DPOK [11], and conduct our experiments with four different capabilities: color consistency, object composition, object count, and object location, as shown in 50 samples with the same random seeds. Figure 5 presents a comprehensive overview of both quantitative and qualitative results for seen and unseen prompts. The results show that our method can generate more consistent images with better quality than the original SDv1.4 and DPOK. For instance, we can see that the results of TexForce are more consistent with the prompts, such as the color of the rabbit and cat, the number of birds, and the location of the dog. Besides, the results of TexForce are more realistic than DPOK and SDv1.4. Quantitatively, Tex-Force attains better average ImageReward scores for both seen and unseen prompts. This underscores the overall superiority of TexForce over DPOK and SDv1.4 across multiple samples. Moreover, the combination of TexForce and DPOK demonstrates the best performance in terms of both ImageReward scores and visual quality. This demonstrates the flexibility of our method, which can be seamlessly integrated with existing methods to achieve state-of-the-art performance without additional training. We retrained with ReFL and AlignProb with official codes, and the results are shown in Figs. 6 and7. As we can observe, since ReFL is trained with a single step backward to update the U-Net, it is less effective than the proposed TexForce to align the input prompts with generated images, such as the white shirt of the fox. The improvement of ReFL is mainly the appearance of the generated images, such as the color of the man and the texture of the fox. Meanwhile, the proposed TexForce is better at aligning the text prompts with generated images, which makes TexForce better in Tab. 1. Furthermore, when merging the strengths of TexForce and ReFL, we observe a notable improvement in both quantitative results and visual appearance. Similarly, we compared our method with AlignProb using the HPSv2 reward. 
The results from AlignProb appeared overly optimized towards the rewards, evident in the abundance of yellow spotlights and clouds, leading to disrupted semantics. In contrast, our proposed TexForce primarily aims to improve the text-image alignment. While our reward score was slightly lower than AlignProb due to limited changes in color style, our results better match the textual prompts in terms of semantics. Additionally, our combined results maintain the visual styles preferred by the HPSv2 while preserving the meaning of the text prompts, resulting in significant improvement over AlignProb." }, { "figure_ref": [], "heading": "Results on Complex Long Prompts", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_8" ], "heading": "Experiments with Different Backbones", "publication_ref": [], "table_ref": [], "text": "To show the robustness of our method, we conduct experiments with more different backbones, including SDv1.5 and SDv2.1. We use the ImageReward score and prompts dataset to train all the models, and the results are shown in Fig. 8 and Tab. 2. Notably, our method consistently improves the text-image alignment of the original models. For instance, the results generated by TexForce exhibit enhanced visual appeal and better consistency with https : / / huggingface . co / runwayml / stablediffusion-v1-5runwayml/stable-diffusion-v1-5 https : / / huggingface . co / stabilityai / stablediffusion-2-1 the prompts, such as the victorian lady, old king, and atom model. It is also worth noting that although SDv2.1 is already much better than SDv1.5, TexForce continues to augment the performance. This demonstrates the adaptability and robustness of our method when employed with different backbones. Although ReFL achieves higher ImageReward scores, our observations indicate that it primarily enhances color and fine details and is less effective than TexForce in aligning images with text prompts. For both SDv1.5 and SDv2.1, the combined model yields the best performance, which clearly affirms the effectiveness of TexForce." }, { "figure_ref": [ "fig_13" ], "heading": "GPT-4V Evaluation", "publication_ref": [ "b47" ], "table_ref": [], "text": "As GPT-4V has recently shown to be comparable with human-level performance in evaluating image quality [47,48], we decide to rely on GPT-4V evaluations instead of traditional user studies, which may be inconsistent and hard to reproduce. Our approach involves using GPT-4V to rank image quality based on aesthetic quality and coherence with text. In Fig. 13, we present the average scores from three rounds of evaluations using the ImageReward test dataset. We can see that our TexForce method does a better job at aligning text with images in diffusion models, while ReFL improves the appearance of the images. Combining both approaches successfully takes advantage of both of them and yields the best results. Please refer to the supplementary material to reproduce the results.\nFace Quality Hand Detection Confidence" }, { "figure_ref": [ "fig_10", "fig_11" ], "heading": "Applications", "publication_ref": [ "b0" ], "table_ref": [], "text": "TexForce demonstrates remarkable adaptability to diverse tasks, as it does not require differentiable rewards. In this section, we showcase its capabilities in enhancing the quality of generated face and hand images.\nFace reward. We employ the face quality evaluation metric from [4], which is based on an image quality evaluation network [5] trained using the face quality dataset [44].\nHand reward. 
Regarding the hand quality evaluation, we recognize the absence of specific hand quality metrics. Instead, we employ a straightforward hand detection confidence score as a reward function and observe its utility. The hand detection model from [1] is used to calculate the confidence score. Figure 10 illustrates the progressive improvement in the quality of generated face and hand images over the course of training. These results illustrate the capacity of TexForce to enhance image quality, utilizing either direct quality metrics or a simple confidence score.\nMoreover, by utilizing LoRA weights for fine-tuning the text encoder, we find that it is feasible to blend specific LoRA weights to enhance the quality of specific objects. Suppose the LoRA weight θ i from i-th task, we can simply fuse them via i α i θ i . In Fig. 11, we demonstrate how the fusion of ImageReward LoRA weights and face quality LoRA weights can produce high-quality face images. This flexibility significantly broadens the range of potential applications for TexForce." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce a new method called TexForce for enhancing the text encoder of diffusion models using reinforcement learning. Our research demonstrates that refining the text encoder can enhance the overall performance of diffusion models, specifically in terms of aligning text and images as well as improving visual quality. Furthermore, we illustrate that TexForce can be seamlessly integrated with existing U-Net models that have undergone fine-tuning, without the need for extra training, resulting in significant performance improvements. Lastly, we showcase the versatility of our approach across various applications, including the generation of high-quality images of faces and hands. We also provide evidence that the finetuned LoRA weights with different tasks can be combined to enhance the specific quality of image generation. Limitations. Similar to other RL-based methods, our method also faces the challenges of sample efficiency and the complexity of reward function engineering. Broader Impacts. Since TexForce can finetune text-toimage models to satisfy specific rewards, it presents potential societal concerns regarding misinformation, intellectual property rights, and illegal usage of the model. " }, { "figure_ref": [], "heading": "B.2. Evaluation Details", "publication_ref": [], "table_ref": [], "text": "We use the seed everything() function provided by PyTorch Lightning to set the random seed for all the experiments. We set the start random seed to 234 for ALL prompts.\nFor the single prompt dataset, we generate 50 examples for each prompt. For the multi-prompts dataset, ImageReward and HPSv2, we generate only one example for each prompt to save time." }, { "figure_ref": [], "heading": "C. GPT4V Evaluation", "publication_ref": [], "table_ref": [], "text": "We use the gpt-4-1106-vision-preview API provided by OpenAI to evaluate the quality of the generated images. The API takes a text prompt and a list of images as input, and returns a rank of the image names based on their aesthetic quality and their coherence with the prompt. 
Below is an example we used for the evaluation:" }, { "figure_ref": [ "fig_13" ], "heading": "GPT4V Evaluation Example (gpt-4-1106-vision-preview) Prompt (#User):", "publication_ref": [], "table_ref": [], "text": "The following four images, Image 1, Image 2, Image 3 and Image 4, are generated with the prompt [alien landscape with futuristic portal to another alien planet, astronaut stepping through the portal], please rank them based on their aesthetic quality and their coherence with the given prompt. Answer with only two lists with the image file names, from good to bad. Please only give two lists. As shown in the example above, GPT4V returns two lists of the image names, ranking from good to bad. We assign score 3 for the best image, 0 for the worst image and the final score is normalized to [0, 1]. For reliability, we run the evaluation for 3 times and report the average score. The results for each round and the final test score is reported in Fig. 13." }, { "figure_ref": [ "fig_14", "fig_2", "fig_2" ], "heading": "D. Additional Experiments D.1. More Results of Incompression Task", "publication_ref": [ "b5", "b9" ], "table_ref": [], "text": "In the main paper, we briefly discussed different behaviors of finetuing U-Net and text encoder in the incompression task. Here, we provide more quantitative and qualitative results about the comparison between finetuning U-Net and text encoder, with the unseen animal prompts below: Animal prompts. Following previous works, we use the 45 simple animals for training. The prompt is defined as A photo of a <animal>. We use the following animals for testing: cheetach, elephant, girraffe, hippo, jellyfish, panda, penguin, swan.\nWe generate 10 samples for each prompt and report their incompression scores in Tab. 4. We can have the following observations:\n• Although the training rewards of text encoder and U-Net are similar, the text encoder is more robust in unseen animals than U-Net, and obtained higher incompression scores. • The quantitative results also confirm that simply combining U-Net and text encoder is quite effective in improving the reward scores. Figure 14 shows visual examples of combining U-Net and text encoder at different checkpoints. We can observe that the text encoder can introduce extra reasonable visual concepts to the image to increase complexity, however, U-Net mainly changes the appearance and is easy to disrupt the original structure such as the hippo head. Aesthetic rewards. Same as previous works, we also conduct experiments using the LAION Aesthetics Predictor [6] as the aesthetic reward function. It should be noted that the aesthetic reward does not consider the coherence with the prompt, and it is easy for the model to hack the reward as shown in [10].\nWe compare results with DDPO and AlignProb in Fig. 15. Both DDPO and our approach are trained with 10K samples, while AlignProb used early stop to avoid model collapse. Figure 15 presents results for three distinct animals: jellyfish, penguin, and swan. We can observe that both DDPO and AlignProb are over-optimized to the rewards, resulting in over stylized images to maximize aesthetic scores. For instance, DDPO tends to produce images characterized by a blurred yellow background, and AlignProb tends to generate images with over-saturated colors and unrealistic textures. In contrast, our method can generate more lifelike images that closely align with the provided prompts. 
For example, the composition of the jellyfish image is aesthetically pleasing, and the unrealistic features of the penguin and swan images have been rectified." }, { "figure_ref": [ "fig_5", "fig_7" ], "heading": "D.3. Experiments of Joint Finetune", "publication_ref": [], "table_ref": [], "text": "We also conduct experiments to jointly finetune the U-Net and text encoder using LoRA with the ImageReward dataset. The training rewards are shown in Fig. 16, and the testing results are presented in Tab. 5. We can notice that although result of joint finetuning is better than ReFL and TexForce, it is still inferior to simple combination of them. We hypothesis this is because the joint optimization is much more difficult that separate training. In Fig. 17, we can observe that when only consider one aspect of text-image alignment and aesthetic quality, results of joint training are slightly worse than TexForce and ReFL respectively. This may suggest that separate training is better than joint training. " }, { "figure_ref": [], "heading": "D.4. Training ReFL with LoRA", "publication_ref": [], "table_ref": [], "text": "To make our model easier to use, we modified the original ReFL to train LoRA weights instead of the entire U-Net, and the results are shown in Tab. 6. We can notice that the performance of ReFL-LoRA is similar to ReFL but much better when combining with TexForce." }, { "figure_ref": [], "heading": "E. More Qualitative Results", "publication_ref": [], "table_ref": [], "text": "In this section, we select more examples on different backbones in Figs. 18 to 23. " }, { "figure_ref": [], "heading": "A. Method Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1. Reinforcement Learning Formulations", "publication_ref": [], "table_ref": [], "text": "We first give the formulations of each term for the PPO algorithm in diffusion models. According to the DDPM paper [17], the t step denoising can be formulated as:\nwhere ᾱt = t i=1 α i , σ 2 t = 1-ᾱt-1 1-ᾱt β t , ϵ ∼ N (0, I), and β t is a predefined variance schedule. Since the U-Net parameter ϕ is fixed during the training, we omit it in the following formulations. The objective function of the PPO algorithm is:\nwhere\nSince p ϕ is an isotropic Gaussian distribution, we have:\nThe reward value A is obtained with reward function A = R(x 0 , s). With equations above, we can get J(ϕ)." }, { "figure_ref": [], "heading": "A.2. Face and Hand Rewards", "publication_ref": [], "table_ref": [], "text": "Figure 12 shows the pipeline of face reward and hand reward. For the face quality reward, we finetuned the TOPIQ model [5] with the GFIQA-20k dataset [44] which is specifically designed for face quality. Since the faces of GFIQA are all aligned, we also need to align the generated face before calculating the reward scores. For hand reward function, there is no existing quality model for hands. We found that simple hand detection confidence score can already give reliable reward for the generation quality of hands. Therefore, we directly use the hand detection confidence as rewards. Thanks to the flexibility of RL, the reward function is not required to be differentiable. We use the pretrained YOLOv5 for hand detection. " } ]
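Appendix A.1 above spells out the clipped PPO objective for a single denoising step, with the per-step probability given by an isotropic Gaussian. The following PyTorch snippet is only a sketch of how that clipped surrogate could be evaluated once the Gaussian log-probabilities of x_{t-1} under the current and sampling-time text encoders are available; the function name, the mean reduction, and the defaults (which mirror the ratio clip of 1e-4 and advantage clip of 10 listed in the training settings) are illustrative rather than the authors' code.

import torch

def ppo_clipped_loss(logp_new, logp_old, advantages, clip_eps=1e-4, adv_clip=10.0):
    # logp_new / logp_old: log p(x_{t-1} | x_t, tau_phi(s)) under the current and
    # old text encoders; advantages: normalized rewards R(x_0, s) over the buffer.
    adv = advantages.clamp(-adv_clip, adv_clip)
    ratio = torch.exp(logp_new - logp_old)                        # r_t(phi)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    # PPO maximizes the element-wise minimum; negate it to obtain a loss to minimize
    return -torch.min(unclipped, clipped).mean()

# toy usage with random tensors standing in for one denoising step of a batch of 8
logp_new = torch.randn(8, requires_grad=True)
logp_old = torch.randn(8)
adv = torch.randn(8)
ppo_clipped_loss(logp_new, logp_old, adv).backward()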
Figure 1. By refining the text encoder through reinforcement learning, the proposed TexForce with Stable Diffusion v1.4 can generate images that align better with human quality preference. The compared images are generated with the same seed and prompts. (a) SDv1.4 and (b) TexForce (SDv1.4): "Impressionist painting of a cat, high quality"; (c) SDv1.4 and (d) TexForce (SDv1.4): "A photo of a hand" & "A complete face of a man".
Enhancing Diffusion Models with Text-Encoder Reinforcement Learning
[ { "figure_caption": "Figure 2. The qualities of outputs from pretrained diffusion models vary a lot with different prompts. Through the reinforcement learning, we can finetune text encoder to better align with high quality images.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Illustration of text encoder finetune with PPO algorithm.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "4. 1 .1Implementation DetailsPrompt Datasets. We follow previous works [2, 11, 30, 50, 51] and use three types of prompt datasets with their corresponding experimental settings:", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Comparison of training progress between finetuning text encoder and U-Net with LoRA. The image size after JPEG compression is marked on the top-left corner as \"kb\". • Simple animal prompts [2]. A simple dataset with a curated list of 45 common animals for training. • Single phrases. Four single phrases from DPOK [11] to test the model capabilities under different scenarios. • Complex long prompts. Subsets from ImageReward [51] and HPSv2 [50]. The former contains 20, 000 prompts for training and 100 for testing. The latter contains 750 prompts for training, 50 for testing. • Specific task prompts. Example task prompts for face and hand images. Reward Functions. We conduct experiments with different kinds of reward functions as below: • Text-to-Image Rewards. These rewards are trained on text-to-image datasets, such as ImageReward [51] and HPSv2 [50]. • Specific task rewards. Following [2], we evaluate model performance for the compression and incompression. Besides, we also design specific rewards for face and hand. Please refer to the supplementary materials for more training details and hyper-parameter settings.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5. Qualitative and quantitative comparisons with SDv1.4 and DPOK on individual scenarios. Images for comparison are generated with the same random seed. The results show that TexForce can generate more consistent images with better quality than SDv1.4 and DPOK, and simple combination of DPOK and TexForce gives even better performance without any additional training.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Visual comparison with ReFL on ImageReward dataset using real user prompts.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Next, we conduct ex-", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Visual comparison with AlignProb on HPSv2 dataset using real user prompts.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Results with different backbones on ImageReward test dataset.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. 
GPT4V evaluation for aesthetic quality and text-image coherence with ImageReward testset and SDv1.4.", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. Two example applications of TexForce: high-quality face and hand generation.", "figure_data": "", "figure_id": "fig_10", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. Fusion of ImageReward LoRA weights and face quality LoRA weights. Prompt: A realistic portrait photo.", "figure_data": "", "figure_id": "fig_11", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 13 .13Figure 13. Complete results of GPT4V evaluation with SDv1.4 backbone on the ImageReward dataset.", "figure_data": "", "figure_id": "fig_13", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 .14Figure 14. Visual examples of combining U-Net and text encoder at different checkpoints in the incompression task.", "figure_data": "", "figure_id": "fig_14", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 .Figure 16 .Figure 17 .Figure 18 .Figure 19 .1516171819Figure 15. Comparison with others on unseen animal prompts: jellyfish, penguin and swan.", "figure_data": "", "figure_id": "fig_15", "figure_label": "1516171819", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", 
"figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Quantitative results on ImageReward and HPSv2. Results are tested with the same seed and prompts.", "figure_data": "MethodImageRewardMethodHPSv2SDv1.40.2154SDv1.40.2752ReFL0.4485AlignProb0.2821TexForce0.4556TexForce0.2767ReFL + TexForce0.6553AlignProb + TexForce0.2914periments using larger dataset with complex long prompts,i.e., the ImageReward dataset [51] and HPS dataset [50].", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Backbone OriginalReFLTexForce ReFL+TexForceSDv1.50.21400.54840.40860.6703SDv2.10.38910.52230.50840.6158", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Hyper-parameters and training settings for single prompt and multi-prompts datasets.As shown in Tab. 3, we use the same hyper-parameters for all the experiments. The only difference is the batch size and the number of samples per epoch. For the single prompt dataset, we use a batch size of 8 and 256 samples per epoch. For the multi-prompts dataset, we use a batch size of 64 and 2048 samples per epoch.", "figure_data": "Hyper-parametersSingle promptMultiple promptsSamplerDDIM [41]DiffusionGuidance Scale7.5Sampling Steps50TypeAdamWLearning rate3e-4OptimizerWeight decay1e-4(β 1 , β 2 )(0.9, 0.999)Gradient clip1.0RL ConfigRatio clip (γ) Advantage clip1e-4 10Rank16LoRAAlpha (α)1Moduleq, k, v, outTrainable #Params.1.18MNumerical precisiontorch.float16Batch size864TrainingSamples per epoch2562048Total epochs100100GPUs1 V1004 A100Time∼2 days∼3 daysB. Experiment SettingsB.1. Training Details", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Quantitative results of combining U-Net and text encoder in the incompression task. The comparison checkpoints are the epochs for U-Net and text encoder to achieve the same evaluation reward.", "figure_data": "Comparison checkpoints010, 7020, 10030, 120Finetune U-Net107.16143.02174.15Finetune text encoder84.01109.48156.10202.79Fusion126.59207.80250.18D.2. Results of Aesthetic Reward with Animal Prompts", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Quantitative comparison between simple fusion and joint training.", "figure_data": "MethodsSDv1.4ReFLTexForceReFL + TexForceJoint-LoRAImageReward Score0.21540.44850.45560.65530.5009", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Quantitative results of ReFL-LoRA on the ImageReward dataset.", "figure_data": "BackboneOriginalReFLReFL-LoRATexForceReFL+TexForceReFL-LoRA+TexForceSDv1.40.21540.44850.44250.45560.65530.7093SDv1.50.21400.54840.55580.28750.62780.7438SDv2.10.38910.52230.51810.43270.52500.6075", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
Chaofeng Chen; Annan Wang; Haoning Wu; Liang Liao; Wenxiu Sun; Qiong Yan; Weisi Lin
[ { "authors": "Mohammad Mahmudul; Alam ; Mohammad Tariqul Islam; S M Mahbubur Rahman", "journal": "Pattern Recognit", "ref_id": "b0", "title": "Unified learning approach for egocentric hand gesture recognition and fingertip detection", "year": "2021" }, { "authors": "Kevin Black; Michael Janner; Yilun Du; Ilya Kostrikov; Sergey Levine", "journal": "", "ref_id": "b1", "title": "Training diffusion models with reinforcement learning", "year": "2023" }, { "authors": "Andrew Brock; Jeff Donahue; Karen Simonyan", "journal": "", "ref_id": "b2", "title": "Large scale gan training for high fidelity natural image synthesis", "year": "2018" }, { "authors": "Chaofeng Chen; Jiadi Mo", "journal": "", "ref_id": "b3", "title": "IQA-PyTorch: Pytorch toolbox for image quality assessment", "year": "2022" }, { "authors": "Chaofeng Chen; Jiadi Mo; Jingwen Hou; Haoning Wu; Liang Liao; Wenxiu Sun; Qiong Yan; Weisi Lin", "journal": "", "ref_id": "b4", "title": "Topiq: A top-down approach from semantics to distortions for image quality assessment", "year": "2023" }, { "authors": "Schuhmann Christoph; Romain Beaumont", "journal": "Laionaesthetics", "ref_id": "b5", "title": "", "year": "2022" }, { "authors": "Kevin Clark; Paul Vicol; Kevin Swersky; David J Fleet", "journal": "", "ref_id": "b6", "title": "Directly fine-tuning diffusion models on differentiable rewards", "year": "2023" }, { "authors": "Xiaoliang Dai; Ji Hou; Chih-Yao Ma; Sam Tsai; Jialiang Wang; Rui Wang; Peizhao Zhang; Simon Vandenhende; Xiaofang Wang; Abhimanyu Dubey", "journal": "", "ref_id": "b7", "title": "Emu: Enhancing image generation models using photogenic needles in a haystack", "year": "2023" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "", "ref_id": "b8", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Hanze Dong; Wei Xiong; Deepanshu Goyal; Rui Pan; Shizhe Diao; Jipeng Zhang; Kashun Shum; Tong Zhang", "journal": "", "ref_id": "b9", "title": "Raft: Reward ranked finetuning for generative foundation model alignment", "year": "2023" }, { "authors": "Ying Fan; Olivia Watkins; Yuqing Du; Hao Liu; Moonkyung Ryu; Craig Boutilier; Pieter Abbeel; Mohammad Ghavamzadeh; Kangwook Lee; Kimin Lee", "journal": "", "ref_id": "b10", "title": "Dpok: Reinforcement learning for fine-tuning text-to-image diffusion models", "year": "2023" }, { "authors": "Rinon Gal; Yuval Alaluf; Yuval Atzmon; Or Patashnik; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "", "ref_id": "b11", "title": "An image is worth one word: Personalizing text-toimage generation using textual inversion", "year": "2022" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "Communications of the ACM", "ref_id": "b12", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "Yuwei Guo; Ceyuan Yang; Anyi Rao; Yaohui Wang; Yu Qiao; Dahua Lin; Bo Dai", "journal": "", "ref_id": "b13", "title": "Animatediff: Animate your personalized text-to-image diffusion models without specific tuning", "year": "2023" }, { "authors": "Yaru Hao; Zewen Chi; Li Dong; Furu Wei", "journal": "", "ref_id": "b14", "title": "Optimizing prompts for text-to-image generation", "year": "2022" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b15", "title": "Classifier-free diffusion guidance", "year": "2022" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "", "ref_id": "b16", "title": 
"Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b17", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Yushi Hu; Benlin Liu; Jungo Kasai; Yizhong Wang; Mari Ostendorf; Ranjay Krishna; Noah A Smith", "journal": "", "ref_id": "b18", "title": "Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering", "year": "2023" }, { "authors": "Tero Karras; Samuli Laine; Timo Aila", "journal": "", "ref_id": "b19", "title": "A style-based generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "Levon Khachatryan; Andranik Movsisyan; Vahram Tadevosyan; Roberto Henschel; Zhangyang Wang; Shant Navasardyan; Humphrey Shi", "journal": "", "ref_id": "b20", "title": "Text2video-zero: Text-toimage diffusion models are zero-shot video generators", "year": "2023" }, { "authors": "Yuval Kirstain; Adam Polyak; Uriel Singer; Shahbuland Matiana; Joe Penna; Omer Levy", "journal": "", "ref_id": "b21", "title": "Pick-a-pic: An open dataset of user preferences for text-to-image generation", "year": "2023" }, { "authors": "Kimin Lee; Hao Liu; Moonkyung Ryu; Olivia Watkins; Yuqing Du; Craig Boutilier; Pieter Abbeel; Mohammad Ghavamzadeh; Shixiang Shane Gu", "journal": "", "ref_id": "b22", "title": "Aligning textto-image models using human feedback", "year": "2023" }, { "authors": "Chunyi Li; Zicheng Zhang; Haoning Wu; Wei Sun; Xiongkuo Min; Xiaohong Liu; Guangtao Zhai; Weisi Lin", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b23", "title": "Agiqa-3k: An open database for ai-generated image quality assessment", "year": "2023" }, { "authors": "Rosanne Liu; Dan Garrette; Chitwan Saharia; William Chan; Adam Roberts; Sharan Narang; Irina Blok; Mohammad Mical; Noah Norouzi; Constant", "journal": "", "ref_id": "b24", "title": "Character-aware models improve visual text rendering", "year": "2022" }, { "authors": "Ruoshi Liu; Rundi Wu; Basile Van Hoorick; Pavel Tokmakov; Sergey Zakharov; Carl Vondrick", "journal": "", "ref_id": "b25", "title": "Zero-1-to-3: Zero-shot one image to 3d object", "year": "2023" }, { "authors": "Chong Mou; Xintao Wang; Liangbin Xie; Yanze Wu; Jian Zhang; Zhongang Qi; Ying Shan; Xiaohu Qie", "journal": "", "ref_id": "b26", "title": "T2iadapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "", "ref_id": "b27", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b28", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "Mihir Prabhudesai; Anirudh Goyal; Deepak Pathak; Katerina Fragkiadaki", "journal": "", "ref_id": "b29", "title": "Aligning text-to-image diffusion models with reward backpropagation", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b30", "title": "Learning transferable visual models from natural language 
supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "", "ref_id": "b31", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b32", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b33", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b34", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily Denton; Seyed Kamyar; Seyed Ghasemipour; Burcu Karagol Ayan; Sara Mahdavi; Rapha Gontijo Lopes", "journal": "", "ref_id": "b35", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Christoph Schuhmann; Richard Vencu; Romain Beaumont; Robert Kaczmarczyk; Clayton Mullis; Aarush Katta; Theo Coombes; Jenia Jitsev; Aran Komatsuzaki", "journal": "", "ref_id": "b36", "title": "Laion-400m: Open dataset of clip-filtered 400 million image-text pairs", "year": "2021" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "", "ref_id": "b37", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "" }, { "authors": "John Schulman; Filip Wolski; Prafulla Dhariwal; Alec Radford; Oleg Klimov", "journal": "", "ref_id": "b38", "title": "Proximal policy optimization algorithms", "year": "2017" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "", "ref_id": "b39", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b40", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "Yang Song; Stefano Ermon", "journal": "", "ref_id": "b41", "title": "Improved techniques for training score-based generative models", "year": "2020" }, { "authors": "Yang Song; Conor Durkan; Iain Murray; Stefano Ermon", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b42", "title": "Maximum likelihood training of score-based diffusion models", "year": "2021" }, { "authors": "Shaolin Su; Hanhe Lin; Vlad Hosu; Oliver Wiedemann; Jinqiu Sun; Yu Zhu; Hantao Liu; Yanning Zhang; Dietmar Saupe", "journal": "IEEE Transactions on Multimedia", "ref_id": "b43", "title": "Going the extra mile in face image quality assessment: A novel database and model", "year": "2023" }, { "authors": "J Zijie; Evan Wang; David Montoya; Haoyang Munechika; Benjamin Yang; Duen Hoover; Chau Horng", "journal": "", "ref_id": "b44", "title": "Diffusiondb: A large-scale prompt gallery dataset for text-toimage generative models", "year": "2022" }, { "authors": "Sam Witteveen; Martin Andrews", "journal": "", "ref_id": "b45", "title": "Investigating prompt engineering in diffusion models", "year": "2022" 
}, { "authors": "Haoning Wu; Zicheng Zhang; Erli Zhang; Chaofeng Chen; Liang Liao; Annan Wang; Chunyi Li; Wenxiu Sun; Qiong Yan; Guangtao Zhai", "journal": "", "ref_id": "b46", "title": "Q-bench: A benchmark for general-purpose foundation models on low-level vision", "year": "2023" }, { "authors": "Haoning Wu; Zicheng Zhang; Erli Zhang; Chaofeng Chen; Liang Liao; Annan Wang; Kaixin Xu; Chunyi Li; Jingwen Hou; Guangtao Zhai; Geng Xue; Wenxiu Sun; Qiong Yan; Weisi Lin", "journal": "", "ref_id": "b47", "title": "Q-instruct: Improving low-level visual abilities for multi-modality foundation models", "year": "2023" }, { "authors": "Jay Zhangjie Wu; Yixiao Ge; Xintao Wang; Stan Weixian Lei; Yuchao Gu; Yufei Shi; Wynne Hsu; Ying Shan; Xiaohu Qie; Mike Zheng Shou", "journal": "", "ref_id": "b48", "title": "Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation", "year": "2023" }, { "authors": "Xiaoshi Wu; Keqiang Sun; Feng Zhu; Rui Zhao; Hongsheng Li", "journal": "ICCV", "ref_id": "b49", "title": "Better aligning text-to-image models with human preference", "year": "2023" }, { "authors": "Jiazheng Xu; Xiao Liu; Yuchen Wu; Yuxuan Tong; Qinkai Li; Ming Ding; Jie Tang; Yuxiao Dong", "journal": "NeurIPS", "ref_id": "b50", "title": "Imagereward: Learning and evaluating human preferences for textto-image generation", "year": "2023" }, { "authors": "Lvmin Zhang; Anyi Rao; Maneesh Agrawala", "journal": "", "ref_id": "b51", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 75.57, 573.91, 206.92, 9.68 ], "formula_id": "formula_0", "formula_text": "p θ (x t-1 |x t ) = N (x t-1 ; µ θ (x t , t), Σ θ (x t , t)), (1" }, { "formula_coordinates": [ 3, 282.49, 574.26, 3.87, 8.64 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 3, 76.54, 646.75, 209.83, 12.69 ], "formula_id": "formula_2", "formula_text": "L(θ) = E xt,s,t,ϵ∼N (0,I) ∥ϵ -ϵ θ (x t , t, z)∥ 2 2 ,(2)" }, { "formula_coordinates": [ 3, 383.63, 459.77, 161.48, 9.65 ], "formula_id": "formula_3", "formula_text": "J(ϕ) = E [R(x 0 , s)] .(3)" }, { "formula_coordinates": [ 3, 313.99, 538.22, 231.12, 30.2 ], "formula_id": "formula_4", "formula_text": "∇ ϕ J = E T t=0 ∇ ϕ log p θ (x t-1 |x t , τ ϕ (s))R(x 0 , s) . (4)" }, { "formula_coordinates": [ 3, 319.87, 635.24, 225.24, 9.65 ], "formula_id": "formula_5", "formula_text": "J = E [min(r t (ϕ)A, clip(r t (ϕ), 1 -λ, 1 + λ)A)] , (5)" }, { "formula_coordinates": [ 4, 56.89, 324.42, 229.47, 10.63 ], "formula_id": "formula_6", "formula_text": "E z∼q ϕ (z|s) [log(p θ (x 0:T |z))] -D KL (q ϕ (z|s)||p(z)). (6)" }, { "formula_coordinates": [ 4, 60.63, 506.31, 225.73, 23.55 ], "formula_id": "formula_7", "formula_text": "∇ θ L = n m ∂R ∂xt ∂xt ∂θ , where m ≤ n ∈ [0, T ]." } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "4D occupancy forecasting is a new challenge introduced in CVPR 2023 WAD. The task is to understand how an environment evolves with time which is crucial for motion planning in autonomous driving. However, conventional methods require costly human annotations, such as detection bounding boxes, tracking ids, or semantic segmentation, which make it difficult to scale up to a large labeled dataset. This challenge aims to learn future occupancy forecasting from unlabeled datasets.\nIn this challenge, given a particular agent's observation of the world in the past n seconds, we need to predict the space-time evolution of the world in the future n seconds. Specifically, the occupancy state of 5 frames in the next 3s is predicted by observing the point cloud of 5 frames in the past and present timestamp within 3s (at a frequency of 5/3Hz). The future frame point cloud is obtained by rendering the occupancy state from a given query ray. Then, all point clouds and occupancies are aligned to the current frame under the LIDAR coordinate system.\nOur solution adopts a voxel feature encoder to trans- " }, { "figure_ref": [ "fig_0" ], "heading": "Method", "publication_ref": [ "b1" ], "table_ref": [], "text": "We employ the bev feature as a unified representation and adopted the OccFormer module as an occupancy head for 4d occupancy forecasting based on UniAD [2]. Occ-Former module consists of a transformer decoder with T sequential blocks. The pipeline of our method is shown in Figure 1." }, { "figure_ref": [], "heading": "BEV feature", "publication_ref": [], "table_ref": [], "text": "We employ the LIDAR BEV encoder based on SEC-OND [6] to generate the LIDAR BEV feature B l . The LI-DAR encoder takes as input the current and past T = 5 frames of the point cloud with a time step of 0.6 s. All point clouds have been aligned to the current frame. Afterward, we fuse the past T frames with the current frame to aggregate temporal features by a 2D convolutional block. The spatial-temporal fused BEV feature map is fed to the occupancy decoder." }, { "figure_ref": [], "heading": "OccFormer", "publication_ref": [], "table_ref": [], "text": "Given the BEV feature from upstream modules, we feed it to OccFormer to get the multi-time-step occupancy results. OccFormer consists of T sequential blocks, where T = 5 is a number of future frames. Each block is responsible for generating the occupancy of a particular frame. Unlike the instance occupancy output by OccFormer in UniAD, we need dense voxel-wise occupancy.\nIn each sequential block, the BEV features first perform self-queries through the self-attentive layer to compute the similarity metric within the feature. Then, we randomly initialize the instance query Q I , which are track query Q A , agent position P A and motion query Q X . The instance arXiv:2311.15660v1 [cs.CV] 27 Nov 2023 Secondly, the historical frame BEV feature maps are fused with the current frame by a temporal encoder. Thirdly, Spatial-temporal fused BEV is fed into OccFormer which generates voxel-wise occupancy forecasting results. Fourthly, UNet works as a second-stage decoder to refine the forecasts. Last, voxel rendering generates point-wise depth along the given ray direction as post-processing. query Q I and BEV feature interact using cross-attention so that the dense scene feature and sparse agent feature benefit from each other. 
The interacted BEV feature is sent to the next sequential block, which is cycled in RNN fashion.\nAll T frame BEV features are upsampled to obtain future occupancy O T . Dimensions of all dense features and instance features are 256 in OccFormer. A UNet head is used as a second-stage decoder to enhance the multiscale forecasting results. After generating voxel-wise occupancy forecasting, voxel rendering is then performed by the query rays to get the point-wise estimated depth." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "The competition used the Argoverse 2 Sensor Dataset [5], which consisted of 1000 scenes (750 for training, 150 for validation, and 150 for testing) with a total of 4.2 hours of driving data. The total dataset is extracted in the form of 1 TB of data. Each vehicle log has a duration of approximately 15 seconds and includes an average of approximately 150 LiDAR scans with 10 FPS LiDAR frames." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b2" ], "table_ref": [], "text": "The metric is performed in the range of [-70m, -70m, -4.5m, 70m, 70m, 4.5m] with a voxel resolution of 0.2m. The absolute L1 distance between the true expected depth along a given ray direction and the predicted expected depth obtained by rendering along the same ray direction is used as the main metric. Absolute relative error (AbsRel), nearfield chamfer distance (NFCD), and vanilla chamfer dis-tance (CD) are measured together as other metrics. More details of the evaluation can be found in the paper [3]." }, { "figure_ref": [], "heading": "Implementation details", "publication_ref": [ "b2" ], "table_ref": [ "tab_1" ], "text": "We followed the baseline [3] for data preparation for training and evaluation. Point shuffle is used for data augmentation in the training stage. The voxel size is set to (0.075m, 0.075m, 0.2m) and the resulting BEV feature B l has a shape of 240*240. We train the model with a cosine annealing policy with a 0.001 learning rate and use L1 loss and Adam optimizer. We train the model from scratch for 20 epochs on 8 V100 GPUs with a total batch size of 8.\nWe submitted our results to the testing server and got a 3.57 L1 Error. The results are shown in Table 1. Our results outperform the baseline with 18% and 15% improvements in L1 Error and Absolute Relative L1 Error, respectively." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b2" ], "table_ref": [], "text": "In our model, we employ a LIDAR encoder to encode spatial features and then do a temporal fusion of historical BEV features. We use the BEV feature as a unified intermediate representation. We employ an OccFormer as 4D occupancy prediction head to loop out future occupancy in RNN style. Following the work of [3], we \"render\" point cloud data from 4D occupancy predictions and estimate depth for supervision. The experimental results indicate that our model achieved better scores than the baseline with 18% and 15% improvements in L1 Error and Absolute Relative L1 Error, respectively." } ]
This report presents our Le3DE2E Occ solution for 4D Occupancy Forecasting in Argoverse Challenges at CVPR 2023 Workshop on Autonomous Driving (WAD). Our solution consists of a strong LiDAR-based Bird's Eye View (BEV) encoder with temporal fusion and a two-stage decoder, which combines a DETR head and a UNet decoder. The solution was tested on the Argoverse 2 sensor dataset to evaluate the occupancy state 3 seconds in the future. Our solution achieved 18% lower L1 Error (3.57) than the baseline on the 4D Occupancy Forecasting task in Argoverse Challenges at CVPR 2023.
Technical Report for Argoverse Challenges on 4D Occupancy Forecasting
[ { "figure_caption": "Figure 1 .1Figure 1. System overview. Firstly the LiDAR point clouds of the current frame are voxelized and encoded to the BEV feature map.Secondly, the historical frame BEV feature maps are fused with the current frame by a temporal encoder. Thirdly, Spatial-temporal fused BEV is fed into OccFormer which generates voxel-wise occupancy forecasting results. Fourthly, UNet works as a second-stage decoder to refine the forecasts. Last, voxel rendering generates point-wise depth along the given ray direction as post-processing.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "4D Occupancy Forecasting on the official leaderboard", "figure_data": "TeamL1 (↓) AbsRel NFCDCDHost 34336 Team (Raytracing)6.680.488.7949.88Host 34336 Team (Point Cloud Forecasting as a P)4.340.265.2392.08Le3DE2E Occ (Ours)3.570.223.3591.61", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
Pengfei Zheng; Kanokphan Lertniphonphan; Feng Chen; Siwei Chen; Bingchuan Sun; Jun Xie; Zhepeng Wang
[ { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "Springer International Publishing", "ref_id": "b0", "title": "End-toend object detection with transformers", "year": "2020" }, { "authors": "Yihan Hu; Jiazhi Yang; Li Chen; Keyu Li; Chonghao Sima; Xizhou Zhu; Siqi Chai; Senyao Du; Tianwei Lin; Wenhai Wang; Lewei Lu; Xiaosong Jia; Qiang Liu; Jifeng Dai; Yu Qiao; Hongyang Li", "journal": "", "ref_id": "b1", "title": "Planning-oriented autonomous driving", "year": "2023" }, { "authors": "Tarasha Khurana; Peiyun Hu; David Held; Deva Ramanan", "journal": "", "ref_id": "b2", "title": "Point cloud forecasting as a proxy for 4d occupancy forecasting", "year": "2023" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b3", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Benjamin Wilson; William Qi; Tanmay Agarwal; John Lambert; Jagjeet Singh; Siddhesh Khandelwal; Ratnesh Bowen Pan; Andrew Kumar; Jhony Hartnett; Deva Kaesemodel Pontes; Peter Ramanan; James Carr; Hays", "journal": "", "ref_id": "b4", "title": "Argoverse 2: Next generation datasets for self-driving perception and forecasting", "year": "2021" }, { "authors": "Yan Yan; Yuxing Mao; Bo Li", "journal": "Sensors", "ref_id": "b5", "title": "Second: Sparsely embedded convolutional detection", "year": "2018" } ]
[]
2023-11-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b10", "b20", "b20", "b21", "b14", "b19", "b33", "b12", "b24", "b3", "b4", "b5", "b30" ], "table_ref": [], "text": "Non-rigid shape matching is the problem of finding correspondences between two geometric objects which differ in their pose or shape. It provides deformation or transfer functions between shapes, including motion fields when considering a single object in motion. Shape matching is a critical part of many applications in computer vision and computer graphics. Recently, data-driven approaches that learn plausible correspondences from training datasets have shown significant improvements in matching shapes with complex deformations, for instance humans with clothing. Yet, they often require ground truth correspondences for training. We consider the unsupervised case where no ground truth is provided.\nFigure 1: Overview of input, output and our results compared to Deep Shells [12], Neuromorph [11] and AttentiveFMaps [21] on challenging data. Each point on target (left) is assigned a color, which is transferred to the source (right) using correspondences computed by different methods. Our results are both globally and locally correct.\nShape matching has been studied extensively over the past decades. Recent data-driven strategies include matching shapes in a spectral basis or adversely in the spatial domain. Spectral methods e.g. [21,22] find global maps between real-valued functions defined in spectral shape spaces, allowing them to generalize to different deformations and sampling rates. However, correct shape topology is assumed for spectral decompositions. Spatial methods learn the alignment of shapes e.g. [15,20,34] or consider the registration to a given template by casting the problem as a vertex classification e.g. [13,25]. They are robust to changes in topology, but exhibit low generalisation abilities to deformations unseen during training due to the unstructured feature space used for matching.\nThis demonstrates that matching two shapes robustly remains difficult, especially when considering raw 3D scans. In addition to large non-rigid deformations and different sampling distributions of the shapes, raw 3D scans suffer from sensor noises including geometric noise of the vertex positions, and topological noise caused by the coalescence of spatially close surface regions. These noises distort both the extrinsic and intrinsic geometry of the shape that is being captured. Matching raw 3D scans remains a challenging problem.\nWe design a robust matching technique that generalizes well and is robust to noise including topological noise present in raw 3D scans. This is achieved by a spatial matching method that retains the robustness of spatial approaches while producing correspondence maps that, as in the case of spectral approaches, capture global geometric shape properties. This is done by relying on two strategies. First, our method considers a hierarchical approach that builds correspondences at different scales of the shapes. We associate shape elements in a coarse-to-fine approach going from coarse surface patches to vertices, which has two main advantages. First, it significantly increases robustness to noise. Second, it reduces the dimensionality of the problem, which allows for efficient unsupervised learning. 
At coarse scales, we can represent the matching between two shapes as a small matrix, on which we can efficiently impose desirable properties such as cycle consistency or minimal distortion. This information can subsequently be leveraged efficiently at finer scales.\nSecond, our method enforces correspondences to agree with a deformation model that controls the spatial continuity of the produced matching. We choose a piece-wise near-rigid deformation model that can represent complex non-rigid deformations, and successfully models deformations of humans, possibly in clothing, and other vertebrates. Imposing spatial continuity on our hierarchical maps constrains correspondences as a whole by linking the matching of individual shape elements, thus allowing to capture global shape properties.\nTo combine these two strategies, an association network estimating the matching between two shapes is combined with a deformation network estimating the induced alignment in 3D. Both networks operate in a hierarchical fashion, which allows for unsupervised training. Although multi-scale matching and deformation guidance were used in optimisation based matching approaches e.g. [4,5,6,31], the novelty of our work is the design of a data-driven method that combines these two strategies in an unsupervised fashion to build a structured feature space where shapes can be embedded and matched with robustness to noise and generalisation to different deformations and sampling rates.\nWe demonstrate experimentally that our method performs on-par with state-of-the-art for standard test scenarios. Furthermore, our approach significantly outperforms existing methods when matching raw 3D scans after having been trained exclusively on clean data. Figure 1 shows this for challenging data with topological noise (top row).\nIn summary, our main contributions are as follows. Firstly, we propose a novel spatial unsupervised data-driven non-rigid shape matching approach that combines multi-scale association maps with a piecewise near-rigid deformation model. Secondly, we outperform state-of-the-art on matching raw 3D scans captured using a multi-camera platform." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b6", "b7", "b14", "b15", "b33", "b19", "b10", "b26", "b21", "b13", "b16", "b28", "b29", "b8", "b20", "b11", "b9", "b19", "b33" ], "table_ref": [], "text": "Non-rigid shape matching has been studied extensively during the past decades. We focus on unsupervised learning-based approaches as our method falls into this category. For a deeper review we refer to a recent survey [7]. Existing works can be roughly categorised into two main classes : spatial methods and spectral methods.\nSpatial Methods Early spatial methods [8,15] proposed supervised strategies to extract a matching between point clouds by learning the deformation in 3D space that best aligns the shapes to a common template. Groueix et al. [16] use an unsupervised network for point cloud matching by template-free deformation, where cycle consistency of the deformation is used as a supervising signal to define good correspondences. Recent methods embed input shapes in a feature space where the shapes are aligned guided by criteria on the alignment in 3D. CorrNet3D [34] and DPC [20] estimate an association in feature space between two point clouds. CorrNet3D guides this association by the reconstruction of the two point clouds, while DPC uses self-and cross-reconstruction of the two point clouds for this purpose. 
Closely related in spirit to our method is NeuroMorph [11], an unsupervised mesh matching method that simultaneously learns associations in feature space and an interpolation between two meshes in 3D. The interpolation is used as a criterion to guide the correspondence search in feature space. In our method, the deformation search is used to guide the global correspondence search in addition to the hierarchical modeling of the matching.\nSpectral Methods Functional maps (FM) [27] introduced the idea of considering pointto-point matching as a special case of mappings between functions defined on shapes, and forms the basis of many matching algorithms. Functions defined on points are projected onto the eigenfunctions of the Laplace-Beltrami operator of the shapes, where the map-ping between functions is estimated. This mapping is then used to extract a point-to-point matching. Deep functional maps were introduced in Litany et al. [22] where instead of using pre-defined point descriptors as the functions to match, the output of a neural network is used. Deep functional maps were successfully extended to the unsupervised regime, by minimizing a distortion measure of the extracted point map [14,17], or by exploiting the desired structural properties of the maps directly in the spectral domain [29,30]. DUO-FM [9] proposes to incorporate the complex functional maps in the representation to estimate orientation preserving maps. More recently, AttentiveFMaps [21] propose a spectral attention framework to combine multiple resolution functional maps producing maps that are particularly robust at handling non-isometric shapes.\nOther spectral approaches combine the alignment in a spectral shape space with an alignment in ambient space. DeepShells [12] is a functional map based method that also aligns shapes in 3D. The learning criterion of the map measures the alignment tightness between the deformed source and the target shape in ambient space. Spectral Teacher Spatial Student (STS) [10] proposes a student teacher mechanism where one of the spatial methods DPC [20] or CorrNet3D [34] is used as a student network to embed shapes in a feature space where the point-to-point mapping is estimated via feature similarity. These features are then used as functions to match in a teacher standard functional maps mechanism. STS uses the spectral teacher to build a feature space that captures global shape properties. We use the deformation model constraint at multiple scales to achieve the same goal." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b27" ], "table_ref": [], "text": "This section details our unsupervised1 data-driven matching approach. Given as input two 3D meshes X and Y, our method outputs a hierarchical mapping between X and Y as well as the deformation in 3D that this mapping induces.\nOur method represents X and Y hierarchically with a hierarchy of surface patches using a greedy approach based on furthest point sampling inspired by [28]. We select L + 1 patch resolutions where patch 0 represents the vertex level (where each patch is restricted to the vertex itself) and L represents the coarsest patch level. In the following we denote the patches as (P l i ) 1≤i≤n l for l = 0, . . . , L and their centers as C l = (c l i ∈ R 3 ) 1≤i≤n l . Our method is implemented by a composition of two networks. First, an association network that aligns X and Y in a hierarchical feature space as detailed in Sec. 3.1. 
Second, a deformation network that constrains these hierarchical maps by aligning X and Y in 3D space by fitting a hierarchical patch-wise near-rigid deformation model as detailed in Sec. 3.2. Fig. 2 shows the architecture of our network." }, { "figure_ref": [], "heading": "Association Network", "publication_ref": [ "b31", "b10", "b19" ], "table_ref": [], "text": "This network estimates association maps hierarchically, from coarse patches to vertices. Association maps are computed as matrices Π l X →Y and Π l Y→X mapping from surface patches of X to surface patches of Y and vice versa. Element (i, j) of Π l X →Y contains a matching score between the i-th patch of X and the j-th patch of Y at hierarchy level l, estimated via feature similarity. The shape features are extracted by a convolutional graph neural network, based on the FeaStConv [32] operator. At hierarchy level l, FeaStConv acts Figure 2: Network architecture. The network takes as input X , Y decomposed into a hierarchy of surface patches. It is decomposed into two networks that work from coarse to fine levels, sequentially per level. An association network extracts coarse-to-fine correspondences between X and Y as inter-patch association matrices (from Π L to Π 0 ). A deformation network outputs deformations that respect these associations in 3D (from X L , Y L to X 0 , Y 0 ). on patch neighborhoods. On the vertex scale (hierarchy level 0), where convolutions act on vertex neighborhoods, we use vertex coordinates and normals as input features. We use patch-wise max-pooling to go to a coarser hierarchy level. This allows to compute patchwise features for hierarchy level l on X and Y, which we denote by\nX l = (x l i ∈ R d l ) 1≤i≤n l and Y l = (y l i ∈ R d l ) 1≤i≤m l ,\nwhere n l , m l are the number of patches at level l for X and Y. Features are computed in a fine-to-coarse manner. To allow for coarse-to-fine hierarchical matching, we combine the feature of a patch at level l with the features of all its parent patches in coarser levels l + 1, . . . , L. We denote these combined features by\nX L,l = (x L,l i ∈ R d l ) 1≤i≤n l .\nAt the coarsest level L, we set X L,L = X L . At level l, we unpool X L,l+1 to level l. This is done by unpooling to the vertex level, and employing maxpooling to all vertices of (P l i ), leading to features Unpool l+1,l (X L,l+1 ). We concatenate Unpool l+1,l (X L,l+1 ) and X l . This concatenated feature may be non-smooth on the surface along patch boundaries in hierarchy level l + 1. To remedy this, we perform one FeaStConv convolution to get X L,l . The same computations yield Y L,l = (y L,l i ∈ R d l ) 1≤i≤m l . The association matrices between patches of X and Y at hierarchy level l are computed using cosine similarity as a distance measure in feature space, similar to previous works [11,20]. We then employ softmax to get normalized similarity scores:\n(Π l X →Y ) i j := exp(s l i j ) ∑ m l k=1 exp(s l ik ) (1) (Π l Y→X ) i j := exp(s l ji ) ∑ n l k=1 exp(s l ki )(2)\nwith\ns l i j := ⟨x L,l i , y L,l j ⟩ 2 / ∥x L,l i ∥ 2 ∥y L,l j ∥ 2 ." }, { "figure_ref": [], "heading": "Deformation Network", "publication_ref": [ "b4", "b34" ], "table_ref": [], "text": "The deformation network deforms X to Y and Y to X while respecting association matrices Π l X →Y and Π l Y→X , respectively, for every level l of the hierarchy. Deformation Model We use a patch-based deformation model that represents a shape deformation as a collection of patch-wise near-rigid deformations [5]. 
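As a reference for Eq. (1) and (2) above, here is a minimal PyTorch sketch of the soft association matrices: cosine similarity between the combined patch features followed by a row-wise softmax. Tensor names and shapes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def association_matrices(feat_x, feat_y):
    """Soft association matrices of Eq. (1)-(2).
    feat_x: (n_l, d) combined patch features of X at level l,
    feat_y: (m_l, d) combined patch features of Y at level l."""
    # cosine similarity s_ij between patch i of X and patch j of Y
    sim = F.normalize(feat_x, dim=-1) @ F.normalize(feat_y, dim=-1).T   # (n_l, m_l)
    pi_x2y = F.softmax(sim, dim=1)      # each X patch distributes its mass over Y patches
    pi_y2x = F.softmax(sim.T, dim=1)    # each Y patch distributes its mass over X patches
    return pi_x2y, pi_y2x
```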
It allows to represent complex deformations while drastically reducing the deformation parameters. We summarize how to deform hierarchy level l and omit subscript l to simplify notation. Given X with its decomposition into surface patches (P i ) 1≤i≤n along with their centers C = (c i ∈ R 3 ) 1≤i≤n , each patch P i is associated with a rotation matrix R i ∈ R 3×3 and a new center position u i ∈ R 3 . If we denote by x 0 (v) ∈ R 3 the original position of vertex v in X , the rigid deformation of a vertex v according to P i can be written as 3 . The rigid deformations of all patches (P i ) 1≤i≤n are blended at the vertex level using weighting functions α i (v) ∈ R defined as a Gaussian of the geodesic distance2 of v to c i .\nx i (v) = R i (x 0 (v) -c i ) + u i ∈ R\nTo ensure consistent deformations between neighboring patches P i and P j ∈ N (P i ), transformations are constrained by minimizing\nl rig (X ) = ∑ (P i ) 1≤i≤n ∑ P j ∈N (P i ) ∑ v∈P i P j E i j v with(3)\nE i j v∈P i P j = (α i (v) + α j (v)) × ∥x i (v) -x j (v)∥ 2 2 . (4\n)\nArchitecture Our network deforms both X to Y and Y to X symmetrically. We detail X to Y in what follows. At level l of the hierarchy, we know matrix Π l X →Y from the association network. The deformation network consists of a feature extractor and a deformation decoder. The feature extractor is a convolutional neural network identical to the one used in the association network. It extracts patch-wise features for X and Y at level l called Xl = (x l i ∈ R d l ) 1≤i≤n l and Ỹl = (ỹ l i ∈ R d l ) 1≤i≤m l . We choose to decouple the features used for association and deformation to allow the networks to learn optimal features independently for each task.\nThe deformation decoder consists of a graph convolutional network followed by an MLP. It outputs rotation parameters (R i ∈ R 6 ) 1≤i≤n l and new center positions (u i ∈ R 3 ) 1≤i≤n l for every patch of level l of X . It takes as input patch centers C X l ∈ R n l ×3 , patch-wise features Xl ∈ R n l ×d l , patch centers of X projected in Y using the association matrix Π l X →Y C Y l ∈ R n l ×3 , which represent spatial targets for the patch centers, and patch-wise features of X projected in patch-wise features of Y using the association matrix Π l X →Y Ỹl ∈ R n l ×d l . We use the 6D representation of rotation matrices introduced in [35] to allow for efficient learning. Applying the resulting transformations leads to the deformed shape X l ." }, { "figure_ref": [], "heading": "Learning Criteria", "publication_ref": [], "table_ref": [], "text": "We train our model using five self-supervised criteria at each hierarchy level l\nl network = L ∑ l=0 (λ l g l l geodesic + λ l c l l cycle + λ l r l l rec + λ l m l l match + λ l ri l l rigidity ),(5)\nwhere the individual loss terms for each level l l geodesic , l l cycle , l l rec , l l match , l l rigidity are weighted by corresponding weights λ l g , λ l c , λ l r , λ l m , λ l ri and detailed below." }, { "figure_ref": [], "heading": "Geodesic Distance Distortion Criterion", "publication_ref": [ "b10", "b16" ], "table_ref": [], "text": "The first criterion favours maps that preserve geodesic distances, and is commonly used for shape matching e.g. [11,17]. We implement this by minimizing\nl l geodesic = ∥Π l X →Y D l Y Π l X →Y T -D l X ∥ 2 2 + ∥Π l Y→X D l X Π l Y→X T -D l Y ∥ 2 2\n, where D l X and D l Y are the geodesic distance matrices of X and Y, restricted to the patch centers of level l. 
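Returning to the deformation model described above, the decoding of the 6D rotation representation and the per-vertex blending of patch-wise rigid transforms can be sketched as follows. The normalisation of the blending weights and the variable names are assumptions made for this illustration.

```python
import torch
import torch.nn.functional as F

def rot6d_to_matrix(r6):
    """6D rotation representation [35] -> 3x3 rotation matrices."""
    a1, a2 = r6[..., :3], r6[..., 3:]
    b1 = F.normalize(a1, dim=-1)
    b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack([b1, b2, b3], dim=-1)                       # (..., 3, 3)

def blend_patch_deformations(verts, centers, rot6d, new_centers, weights):
    """Blend per-patch near-rigid transforms at the vertex level.
    verts (V, 3), centers / new_centers (P, 3), rot6d (P, 6),
    weights (V, P): Gaussian-of-geodesic-distance blending weights alpha_i(v)."""
    R = rot6d_to_matrix(rot6d)                                     # (P, 3, 3)
    local = verts[:, None, :] - centers[None, :, :]                # (V, P, 3)
    per_patch = torch.einsum('pij,vpj->vpi', R, local) + new_centers[None]   # x_i(v)
    w = weights / weights.sum(dim=1, keepdim=True).clamp(min=1e-8)
    return (w[..., None] * per_patch).sum(dim=1)                   # (V, 3) deformed vertices
```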
This criterion measures the geodesic distortion of maps Π l X →Y and Π l Y→X . We only use this loss on coarse levels because it is computationally prohibitive for finer levels.\nCycle Consistency Criterion This criterion asks Π l X →Y and Π l Y→X to be cycle consistent, i.e. every point going through a length two cycle is mapped back to itself. It is implemented by minimizing\nl l cycle = ∥Π l X →Y (Π l Y→X C X l ) -C X l ∥ 2 2 + ∥Π l Y→X (Π l X →Y C Y l ) - C Y l ∥ 2 2 .\nSelf Reconstruction Criterion The third criterion aims to identify each patch in feature space, to avoid many patches being mapped to the same patch. This criterion helps handling intrinsically symmetric shapes, where different patches share the same intrinsic geometry. It is implemented by minimizing\nl l rec = ∥Π l X →X C X l -C X l ∥ 2 2 + ∥Π l Y→Y C Y l -C Y l ∥ 2 2\n, where Π l X →X and Π l Y→Y are self association matrices computed similar to Eq. 1 and 2. Matching Criterion The matching criterion aims to deform one shape to the other using the computed associations. It serves two roles. First, ensuring that the deformation realised by the deformation network matches the correspondences computed by the association network. Second, allowing for extrinsic geometric control on the correspondences. It is implemented by minimizing l l match = ∥C\nX l l -Π l X →Y C Y l ∥ 2 2 + ∥C Y l l -Π l Y→X C X l ∥ 2 2\n, where C X l l and C Y l l are the deformed cluster centers of X l and Y l respectively. Rigidity Criterion The rigidity criterion aims for consistent transformations between neighboring patches at all hierarchy levels as l l rigidity = l rig (X l ) + l rig (Y l ), with l rig defined in Eq. 3. The matching and rigidity criteria encourage the network to explain the similarity in feature space on the deformed shapes in 3D at every level of the patch hierarchy. At every level of the hierarchy, the deformation for every vertex on the shapes is evaluated even though the matching criterion evaluates the deformation result only on the patch centers of the deformed shapes. This means that even if the association matrices are sparse at coarse levels they consider the full correspondence between the two shapes. This acts as a strong inductive bias in our network, which promotes spatially continuous maps.\nFine Tuning Our full network is trained in an end-to-end fashion. After training, the network produces good quality matchings in a single forward pass. We improve these matchings by optimizing the network for each pair X and Y only, with the same architecture and hyper-parameters. This is possible as all of our losses are unsupervised and do not require any ground truth information. We refer to this specialisation as fine tuning. We extract final dense correspondences by applying argmax on vertex associations Π 0 X →Y and Π 0 Y→X ." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b20", "b10", "b11", "b9" ], "table_ref": [], "text": "This section presents results of our method, including comparative evaluations, for human and animal shapes. For human shapes, we provide an extensive evaluation protocol to demonstrate that our method generalizes well to new body shapes and poses, sampling densities, topological noise, and raw acquisition data. In all of our experiments we fix the number of patches to 50, 200 and 800. 
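For reference, the per-level criteria above translate almost directly into code. The sketch below assumes soft association matrices pi_xy (n x m) and pi_yx (m x n), patch centers C_x, C_y, geodesic distance matrices D_x, D_y restricted to the patch centers, and deformed centers def_C_x, def_C_y; the reduction and the per-level weighting of Eq. (5) are omitted.

```python
import torch

def geodesic_loss(pi_xy, pi_yx, D_x, D_y):
    # squared Frobenius norm of the geodesic distortion in both directions
    return ((pi_xy @ D_y @ pi_xy.T - D_x) ** 2).sum() + \
           ((pi_yx @ D_x @ pi_yx.T - D_y) ** 2).sum()

def cycle_loss(pi_xy, pi_yx, C_x, C_y):
    # length-two cycles should map each patch center back to itself
    return ((pi_xy @ (pi_yx @ C_x) - C_x) ** 2).sum() + \
           ((pi_yx @ (pi_xy @ C_y) - C_y) ** 2).sum()

def self_reconstruction_loss(pi_xx, pi_yy, C_x, C_y):
    # self association should be close to the identity on patch centers
    return ((pi_xx @ C_x - C_x) ** 2).sum() + ((pi_yy @ C_y - C_y) ** 2).sum()

def matching_loss(def_C_x, def_C_y, pi_xy, pi_yx, C_x, C_y):
    # deformed centers of X should land on their soft targets in Y, and vice versa
    return ((def_C_x - pi_xy @ C_y) ** 2).sum() + ((def_C_y - pi_yx @ C_x) ** 2).sum()
```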
More details are provided in the appendices;\nAppendix A provides implementation details, Appendix B provides details on data processing, Appendix C provides an ablation study proving the benefits of the main parts of our method i.e. hierarchy in feature space, use of a deformation model and fine tuning, and Appendix D provides additional quantitative and qualitative comparisons and qualitative deformation results. Evaluation metrics We consider two metrics that compare the ground truth mapping Π gt .→. with an estimated mapping Π .→. . The mean geodesic error (MGE):\nMGE(Π X →Y ) := 1 |X | ∑ x∈X d Y (Π X →Y (x), Π gt X →Y (x)),(6)\nand the cycle geodesic error (CycleGE):\nCycleGE(Π X →Y ) := 1 |X | 2 ∑ x 1 ,x 2 ∈X ×X |1 - d X (x 1 , x 2 ) d X ( x1 , x2 ) | with xi := Π Y→X (Π X →Y (x i )),(7)\nwhere d X , d Y are the geodesic distances on meshes X and Y, respectively, normalised by the square root of their area. These metrics allow to evaluate the distance between the true and estimated correspondences of a point x on Y, and the metric distortion induced by matching a pair of points using a length two cycle.\nCompeting methods We compare our results against four state-of-the-art methods. FM based AttentiveFMaps [21]. Spatial mesh matching method Neuromorph [11] that guides the correspondences search using an interpolation while we search for a deformation. Deep Shells [12] an FM method that combines a spectral alignment with a spatial alignment. Point cloud matching method STS [10] that uses an FM pipeline to extract global geometric properties, while we leverage spatial continuity at multiple scales. All methods that support supervised and unsupervised training were trained in the unsupervised regime." }, { "figure_ref": [], "heading": "Evaluation on human data", "publication_ref": [ "b1", "b2", "b32", "b25", "b17", "b23", "b0", "b2", "b35" ], "table_ref": [], "text": "Data We train our method on clean pre-processed data, and test it both on (possibly degraded) pre-processed data and raw 3D acquisitions. The pre-processed data we use is the extended FAUST dataset [2], which extends FAUST [3] with synthetically generated body shapes and poses. Extended FAUST contains 561 naked human meshes with 21 body shapes in roughly 27 poses. We select 451 meshes for training and 110 meshes for testing, where the test set contains body shapes and poses not used for training. We re-mesh each mesh independently and uniformly to about 5k vertices using [33] to remove vertex density as a discriminative feature. We design test sets to evaluate different types of generalization.\nGeneralization to unseen body shape and pose The first test set considers pairs of models of the extended FAUST test set that differ in body shape and pose. It allows to evaluate how well methods generalize to body shapes and poses not observed during training.\nRobustness to sampling density To evaluate robustness to different sampling strategies, we consider two test sets using resampled versions of the extended FAUST test set. The first one resamples the test set uniformly to contain ≈ 15k vertices. The second one resamples the test set with curvature adapted triangles [26] to contain ≈ 5k vertices.\nGeneralization to different topology Our method is particularly robust to changes in topology. To demonstrate this, we design a test set by altering the topology of the extended FAUST test set. 
Extended FAUST contains many shapes with near-contacts between different body parts, and we detect these parts and replace them by gluing the parts in contact. In practice, we achieve this by detecting self-intersections, deleting mesh parts located inside the mesh, and by filling the resulting holes using Poisson surface reconstruction [18]. The final reconstructed surface contains ≈ 6k vertices.\nGeneralization to raw 3D acquisitions Our goal is to have a method that can be applied to raw scan data without pre-processing. Hence, we test our method on two test sets of raw acquisition data. All these data are plagued by acquisition noise, which alters the geometry or topology. The first one contains 17 scans (136 pairs) of minimally dressed humans of CHUM [24] of two female subjects and one male subject, all in different poses. Each mesh contains about 15k vertices. The second one contains 12 scans (66 pairs) of humans in everyday clothing of two female subjects and two male subjects of the dataset 4DHumanOutfit [1], all in different poses. These scans were captured in a multi-view acquisition platform with 68 synchronized cameras.\nComparison to state-of-the-art We first provide a quantitative evaluation with Neuromorph, AttentiveFMaps and Deep Shells, where all methods were trained on the extended FAUST training set and tested on the different test sets. Tab. 1 shows the results. We could not include STS in this comparison, the code for pre-processing new data is not provided, making training or testing on our data difficult. For all test sets, our method outperforms Neuromorph and Deep Shells. Furthermore, our method performs on-par overall with AttentiveFMaps for pre-processed data. While AttentiveFMaps performs slightly better than our method when applied to clean pre-processed data (unseen body shape and pose and different sampling density columns in Tab. 1), it degrades significantly in the presence of topology changes. We believe that this is due to the global nature of spectral shape decompositions that naturally favours clean data. Our method significantly outperforms AttentiveFMaps in this case as our method generalizes well to topological errors. Our approach builds on a more local strategy, though also considering global geometric shape properties. This results in rare failure cases with important symmetry ambiguities but provides a strongly increased robustness to noise present in real situations, with raw acquisition data for example.\nFor raw acquisition data, our method significantly outperforms all other methods, demonstrating its practical value. This is thanks to its robustness to geometric and topological noise which characterize raw 3D acquisitions. Fig. 1 (top row) visualizes an example result of a raw 3D data acquisition in everyday clothing. The target is color coded and the colors are transferred using the correspondences computed by the different methods. Our method is correct both globally and locally, while competing methods tend to fail in this case. Our method is designed for complete shape matching, thus we only report results on shape acquisitions that are complete. Our method fails locally on partial shapes.\nTo compare to STS, we run our method on their training and test sets of FAUST [3],\nwith their evaluation protocol considering point-to-point accuracy 3 and MGE 4 . While STS reports 50.5% and 9.5, our method's results are 22.2% and 1.6 for point-to-point accuracy and MGE, respectively. Hence, our method outperforms STS. 
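As a reference for the evaluation protocol used above, here is a minimal sketch of the two metrics of Eq. (6) and (7), assuming hard point-to-point maps stored as index arrays and precomputed geodesic distance matrices normalised by the square root of the surface area. Subsampling vertex pairs in CycleGE is an implementation shortcut, not part of the definition.

```python
import numpy as np

def mean_geodesic_error(pred, gt, D_y):
    """MGE (Eq. 6): geodesic distance on Y between predicted and ground-truth
    images of each vertex of X. pred, gt: (|X|,) indices into Y; D_y: normalised
    geodesic distance matrix of Y."""
    return D_y[pred, gt].mean()

def cycle_geodesic_error(pi_xy, pi_yx, D_x, pairs):
    """CycleGE (Eq. 7) estimated on a subset of vertex pairs of X.
    pi_xy, pi_yx: hard maps as index arrays; pairs: (K, 2) vertex index pairs."""
    x1, x2 = pairs[:, 0], pairs[:, 1]
    cx1, cx2 = pi_yx[pi_xy[x1]], pi_yx[pi_xy[x2]]
    return np.abs(1.0 - D_x[x1, x2] / np.maximum(D_x[cx1, cx2], 1e-8)).mean()
```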
Data We consider pre-processed animal models sampled from the SMAL animal model [36]. We consider 49 animal meshes re-meshed independently to about 5k vertices. We use 32 meshes for training and 17 (136 pairs) for testing." }, { "figure_ref": [], "heading": "Evaluation on animal data", "publication_ref": [], "table_ref": [], "text": "Comparison to state-of-the-art Tab. 2 shows the results for Deep Shells, Neuromorph, Atten-tiveFMaps and our method. Our method's performance is better than Deep Shells and Neuromorph, and slightly lower than AttentiveFMaps. Fig. 1 (bottom row) visualizes an example result. Our method is visually on par with AttentiveFMaps. This shows that our method successfully applies to data beyond human models." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "We have presented a data-driven unsupervised approach to solve for non-rigid shape matching. Our approach uses an association network that embeds the shapes in a feature space where hierarchical maps are extracted. It then constrains these maps by extracting the induced alignment in 3D using a deformation network that fits a piece-wise near-rigid deformation model. Our method retains the robustness of spatial methods while enforcing global geometric constraints on the associations, as with spectral methods. We demonstrate experimentally that our approach performs on-par with state-of-the-art for pre-processed data and significantly outperforms existing methods when applied directly to the raw output of multi-view 3D reconstructions.\nThis supplementary material provides implementation details of our method in Appendix A, more details on the datasets used in Appendix B, ablation studies in Appendix C, and additional quantitative and qualitative results in Appendix D." }, { "figure_ref": [], "heading": "A Implementation Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "A.1 Patch Extraction", "publication_ref": [ "b27" ], "table_ref": [], "text": "To compute multi-resolution surface patches, we consider a greedy approach based on the furthest point sampling strategy inspired by [28]. Starting from a randomly selected vertex x 1 , we compute the geodesic distance map U x 1 to all other vertices on the mesh. Then, given that we have a set of vertices S n = {x 1 , .., x n }, and their distance map U n , we select the new vertex x n+1 to be the furthest vertex from S n . We compute U x n+1 , the distance map from x n+1 and update U n+1 = min(U n ,U x n+1 ) and add x n+1 to S n to get S n+1 . We stop the algorithm when a target number of points is reached. To get multiple patch resolutions we stop the algorithm at increasing numbers of target points.\nWe select L + 1 patch resolutions where patches at level 0 are restricted to vertices and level L represents the coarsest patch level. For each hierarchical level, the selected samples are used as patch centers C l = (c l i ∈ R 3 ) 1≤i≤n l and their corresponding Voronoi cells on the mesh as patches (P l i ) 1≤i≤n l for l = 0, . . . , L. In all of our experiments, we extract 4 patch resolutions : all the mesh vertices, 800, 200 and 50 patches. Figure 3 shows an example of these surface patches. 
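A minimal sketch of the farthest-point-sampling patch construction described in A.1 is given below, assuming a precomputed all-pairs geodesic distance matrix; the paper instead computes distance maps on the fly, which is cheaper in memory, but the greedy selection and the Voronoi assignment are the same.

```python
import numpy as np

def geodesic_fps_patches(geo_dist, n_samples, seed=0):
    """Greedy farthest-point sampling and Voronoi patch assignment on a mesh.
    geo_dist: (V, V) geodesic distance matrix; returns patch centers and a
    per-vertex patch label."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(geo_dist.shape[0]))]   # random starting vertex
    min_dist = geo_dist[centers[0]].copy()             # U_n: distance to current set
    while len(centers) < n_samples:
        nxt = int(np.argmax(min_dist))                 # farthest vertex from the set
        centers.append(nxt)
        min_dist = np.minimum(min_dist, geo_dist[nxt])
    labels = np.argmin(geo_dist[centers], axis=0)      # Voronoi cells as patches
    return np.array(centers), labels
```

Running it with decreasing target counts (e.g. 800, 200, 50) yields the nested patch resolutions used for the hierarchy.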
" }, { "figure_ref": [ "fig_1" ], "heading": "A.2 Architecture Detail", "publication_ref": [ "b31" ], "table_ref": [], "text": "Pooling and Unpooling The extracted surface patches are not strictly hierarchical in the sense that vertices of a patch at level l can belong to multiple patches at coarser levels l + 1. We use these surface patches for pooling and unpooling operations in our architectures.\nTo pool features from patches (P l i ) 1≤i≤n l of level l to coarser patches (P l+1 i\n) 1≤i≤n l+1 of level l + 1 we proceed in two step:\n1. We first unpool to the vertex level (level 0) such that each vertex is associated with the features of the patch it belongs to in level l.\n2. We then employ max-pooling to go to patch level l + 1.\nTo unpool features from patches (P l+1 i ) 1≤i≤n l+1 of level l + 1 to finer patches (P l i ) 1≤i≤n l of level l we proceed similarly. We first unpool to the vertex level (level 0) such that each vertex is associated with the features of the patch it belongs to in level l + 1. We then employ max-pooling to go to patch level l.\nFeature Extractor In both the association and the deformation networks, we use identical feature extractors based on hierarchical graph convolutional network FeaStConv operators [32]. In all cases, we fixed the number of attention heads of this operator to 9. Figure 4 illustrates the feature extractor architecture. " }, { "figure_ref": [], "heading": "Graph Convolution", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Linear Layer", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "ELU Activation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Patch wise Max Pooling", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.3 Training and Inference", "publication_ref": [ "b18" ], "table_ref": [ "tab_3" ], "text": "Loss Weights The final loss per level l is the weighted combination of five self-supervised criteria, as given in Equation 5of the main paper. Table 3 shows the weights used to train the network. The geodesic criterion was not used at the vertex level (weight fixed to 0) as it involves matrix multiplications that are computationally prohibitive. The matching loss and the rigidity loss are somehow in opposition: The rigidity loss promotes isometric deformations whereas the matching loss promotes deformations that reflect the association matrix. Their weights are fixed so as to satisfy the association matrices while preserving the spatial continuity at the patch borders.\nTraining Our network is trained using Adam [19] with gradient clipping. The learning rate is fixed to 1 × 10 -3 for the first epoch, 5 × 10 -4 between the 2 nd and 10 th epochs and 2.5 × 10 -4 after the 10 th epoch. Our model takes 372 epochs to train on the Extended FAUST training set. We select the model that achieves the smallest loss on a validation set.\nInference At test time we allow the network to specialise for each new shape pair to improve the matching. This is achieved by resuming the training of the selected model on a training set restricted to the two input shapes, with the same fixed architecture, optimization technique and hyper-parameters. 
We refer to this specialisation as fine tuning in the main" }, { "figure_ref": [], "heading": "Graph Convolution", "publication_ref": [], "table_ref": [], "text": "Linear Layer" }, { "figure_ref": [], "heading": "ELU Activation", "publication_ref": [], "table_ref": [], "text": "Figure 7: Architecture of the deformation decoder. It is composed of a graph convolution followed by an MLP. shape, the triangle on the original mesh, with minimal distance along the vertex normal direction. As a result of the connectivity changes, such distance can be large and we discard in the evaluation vertices for which this distance is higher than 2/10 of the remeshed shape's mean edge length. This corresponds to 4.87% of the vertices for uniform remeshings to 5k vertices, 1.2% for uniform remeshing to 15k vertices and 1.01% for curvature adapted remeshings to 5k vertices. A similar strategy is used for meshes altered with topological noise. In this case, 7.69% of the vertices are discarded in the evaluation." }, { "figure_ref": [], "heading": "Raw 3D Acquisition", "publication_ref": [ "b23", "b22" ], "table_ref": [], "text": "We also experimented our matching approach with raw 3D scans, e.g. [24]. In this case, ground truth matchings obtained fitting the SMPL [23] to the scans and considering closest vertices on the template in the normal direction. Again here, vertices with distances to the template higher than 2 times the scan's mean edge length are discarded in the evaluation. This corresponds in practice to 8.04% for the naked raw acquisition dataset and 17.42% for the clothed one." }, { "figure_ref": [], "heading": "C Ablation Studies", "publication_ref": [], "table_ref": [], "text": "We present ablation studies that evaluate the respective benefits of the main components of our approach, i.e. the hierarchical modeling and the deformation model constraint. The impact of fine tuning is also given. To assess the hierarchical modeling, we use only two levels of hierarchy: the mesh vertices and 800 patches. To assess the deformation model constraint, the network is restricted to the association network, losses involving the deformations, i.e. matching and rigidity loss, are discarded. We trained the models on extended FAUST and tested on both extended FAUST and the raw 3D acquisition data with everyday clothing. Tab. 4 shows the results and the number of epochs required to train each model. The complete model achieves the best results on both pre-processed and raw acquisition data. Note that the dimensionality reduction using the hierarchical modeling of associations is essential to the unsupervised learning and allows for much faster training. Note also that the deformation model improves the quality of the matching and that the fine most challenging test dataset, our method is more accurate both when considering details (i.e. small errors) and global alignments (i.e. large errors)." }, { "figure_ref": [], "heading": "D.2 Additional Qualitative Comparisons", "publication_ref": [], "table_ref": [], "text": "Fig. 9 shows qualitative comparisons on pre-processed meshes in the first row, pre-processed meshes with topological noise in the second row (the left heel is glued to the right calf and the left arm is glued to the head on the target shape), naked raw 3D scans in the third row and clothed raw 3D scans in the fourth row. The target is color coded and the colors are transferred using the correspondences as estimated by the different methods. 
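For reference, this color-transfer visualization reduces to a lookup through the estimated point-to-point map; the sketch below assumes the correspondences are available as an index array corr mapping each source vertex to a target vertex, which is an illustrative convention rather than the exact interface of any of the compared methods.

```python
import numpy as np

def coordinate_colors(target_verts):
    # Assign each target vertex an RGB color from its normalized XYZ position.
    v = target_verts - target_verts.min(axis=0)
    return v / np.maximum(v.max(axis=0), 1e-9)

def transfer_colors(target_verts, corr):
    # corr[i] = index of the target vertex matched to source vertex i, so the
    # returned array gives the transferred color of every source vertex.
    return coordinate_colors(target_verts)[corr]
```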
Ours is both locally and globally accurate in all four cases and is robust to the presence of hairs, clothes and severe topological noises (see the last row). DeepShells makes global alignment errors ( e.g. the arms are flipped in the first row, the belly area also in the other rows). Neuromorph suffers from local distortions and makes local errors ( e.g. see the right hand in all rows). AttentiveFMaps is globally and locally accurate on the pre-processed meshes in the first row but fails in the presence of topological noise which is ubiquitous in raw scans ( e.g. see the full body in the second and fourth rows and the left arm area in the third row)." }, { "figure_ref": [], "heading": "D.3 Qualitative Deformation Results", "publication_ref": [ "b11", "b10", "b20" ], "table_ref": [], "text": "Fig. 10 shows an example of deformed shapes output by our network at every hierarchical level. The top row shows the deformation in the X → Y direction and the bottom row shows the deformation in the Y → X direction. The deformed shapes at the finest level, i.e. the vertex level are close to the target deformation shapes (X 0 ≈ Y and Y 0 ≈ X ). In coarser levels, the deformation approximates the pose (global alignment) while in finer levels, where the rigidity constraint is weaker, it approximates the body shape (local alignment). This deformation is the induced alignment in 3D that guides the matching output of our method as shown in the top row of Fig. 9.\nFigure 9: Comparisons with Deep Shells [12], Neuromorph [11] and AttentiveFMaps [21] on pre-processed human meshes (first and second rows) and on raw human 3D scans (third and forth rows). Each point on the target mesh (left) is assigned a color, which is transferred to the source mesh (right) using the correspondences estimated by the different methods." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Abdelmouttaleb Dakri for providing us with SMPL fittings for our experiments. This work was funded by the ANR project Human4D (ANR-19-CE23-0020)." }, { "figure_ref": [], "heading": "Deformation Features", "publication_ref": [], "table_ref": [], "text": "Deformation\npaper. In all our experiments we fix the number of epochs for fine tuning to 50." }, { "figure_ref": [], "heading": "B Data", "publication_ref": [ "b1" ], "table_ref": [], "text": "We detail below how ground truth correspondences were obtained to evaluate our approach.\nPre-processed Data For our experiments on pre-processed data, we used the extended FAUST dataset [2] which is composed of meshes with the same template connectivity. In order to remove the connectivity consistency, which can strongly bias the matching, we remeshed all shapes individually and created the 3 test sets mentioned in the main paper (sec 4.1). To get the ground truth correspondences for vertices on the remeshed shapes, we revert to the connectivity consistent meshes by searching, for each vertex on a remeshed To prove the added benefit of the Self-Reconstruction Criterion that ensures that each patch, on the shape itself, is identified to avoid many-to-one matches, we present an ablation where we both train and test the models on extended FAUST. Tab. 5 shows the results. " }, { "figure_ref": [], "heading": "D Additional Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D.1 Additional Quantitative Evaluation", "publication_ref": [], "table_ref": [], "text": "Fig. 
8 shows cumulative error plots giving the percentage of correct correspondences within a certain tolerance radius of geodesic error for the raw acquisitions in the naked regime in (a) and with everyday clothing in (b). On naked raw acquisitions, our method achieves 99% of exact matches when we tolerate errors smaller than 8.1% of the geodesic diameter, where the second best, i.e. Neuromorph, achieves 85.1%. On clothed raw acquisitions, which is the " } ]
We present an unsupervised data-driven approach for non-rigid shape matching. Shape matching identifies correspondences between two shapes and is a fundamental step in many computer vision and graphics applications. Our approach is designed to be particularly robust when matching shapes digitized using 3D scanners that contain fine geometric detail and suffer from different types of noise including topological noise caused by the coalescence of spatially close surface regions. We build on two strategies. First, using a hierarchical patch-based shape representation we match shapes consistently in a coarse-to-fine manner, allowing for robustness to noise. This multi-scale representation drastically reduces the dimensionality of the problem when matching at the coarsest scale, rendering unsupervised learning feasible. Second, we constrain this hierarchical matching to be reflected in 3D by fitting a patch-wise near-rigid deformation model. Using this constraint, we leverage spatial continuity at different scales to capture global shape properties, resulting in matchings that generalize well to data with different deformations and noise characteristics. Experiments demonstrate that our approach obtains significantly better results on raw 3D scans than state-of-the-art methods, while performing on-par on standard test scenarios.
Deformation-Guided Unsupervised Non-Rigid Shape Matching
[ { "figure_caption": "Figure 3 :3Figure 3: Example of surface patches. From left to right : 800, 200 and 50 patches along with their centers on a mesh with ≈ 13k vertices.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Feature extractor architecture. Inputs X and Y are decomposed into a hierarchy of 800, 200 and 50 patches and fine-to-coarse features are extracted. Input features at the vertex level are 3D coordinates and normals.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure5: of the association network. Given meshes X and Y it outputs coarseto-fine association maps at every level of the patch hierarchy, i.e. Π l X →Y and Π l Y→X for every level l.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Pre-processed dataRaw 3D acquisitionsMethodBody shape and poseSampling densityTopologyNakedClothedUniform 15K vert. Non-uniform 5K vert.Deep Shells13.390.06949.110.056417.100.140920.48 0.1608 16.40 0.0775 26.49 0.2275Neuromorph11.610.131514.450.152737.870.538911.92 0.1231 13.83 0.1407 16.01 0.1429AttentiveFMaps2.650.01712.470.01602.820.038518.93 0.3426 11.38 0.1608 49.19 0.7701Ours4.090.02594.150.03045.850.05528.22 0.04413.74 0.0349 6.02 0.0463", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Loss weights for every level.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" } ]
Aymen Merrouche; João Regateiro; Stefanie Wuhrer; Edmond Boyer
[ { "authors": "Matthieu Armando; Laurence Boissieux; Edmond Boyer; Jean-Sébastien Franco; Martin Humenberger; Christophe Legras; Vincent Leroy; Mathieu Marsot; Julien Pansiot; Sergi Pujades", "journal": "", "ref_id": "b0", "title": "4dhumanoutfit: a multi-subject 4d dataset of human motion sequences in varying outfits exhibiting large displacements", "year": "2023" }, { "authors": "Jean Basset; Adnane Boukhayma; Stefanie Wuhrer; Franck Multon; Edmond Boyer", "journal": "IEEE", "ref_id": "b1", "title": "Neural human deformation transfer", "year": "2021" }, { "authors": "Federica Bogo; Javier Romero; Matthew Loper; J Michael; Black", "journal": "", "ref_id": "b2", "title": "FAUST: Dataset and evaluation for 3D mesh registration", "year": "2014" }, { "authors": "Francesco Bonarrigo; Alberto Signoroni; Mario Botsch", "journal": "Graphical Models", "ref_id": "b3", "title": "Deformable registration using patch-wise shape matching", "year": "2014" }, { "authors": "Cedric Cagniart; Edmond Boyer; Slobodan Ilic", "journal": "IEEE", "ref_id": "b4", "title": "Free-form mesh tracking: a patch-based approach", "year": "2010" }, { "authors": " Van-Toan; Trung-Thien Cao; Denis Tran; Laurendeau", "journal": "Graphical Models", "ref_id": "b5", "title": "A two-stage approach to align two surfaces of deformable objects", "year": "2015" }, { "authors": "Bailin Deng; Yuxin Yao; Roberto M Dyke; Juyong Zhang", "journal": "Computer Graphics Forum", "ref_id": "b6", "title": "A survey of non-rigid 3d registration", "year": "2022" }, { "authors": "Theo Deprelle; Thibault Groueix; Matthew Fisher; Vladimir Kim; Bryan Russell; Mathieu Aubry", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b7", "title": "Learning elementary structures for 3d shape generation and matching", "year": "2019" }, { "authors": "Nicolas Donati; Etienne Corman; Maks Ovsjanikov", "journal": "", "ref_id": "b8", "title": "Deep orientation-aware functional maps: Tackling symmetry issues in shape matching", "year": "2022" }, { "authors": "Omri Efroni; Dvir Ginzburg; Dan Raviv", "journal": "IEEE", "ref_id": "b9", "title": "Spectral teacher for a spatial student: Spectrum-aware real-time dense shape correspondence", "year": "2022" }, { "authors": "M Eisenberger; D Novotny; G Kerchenbaum; P Labatut; N Neverova; D Cremers; A Vedaldi", "journal": "", "ref_id": "b10", "title": "Neuromorph: Unsupervised shape interpolation and correspondence in one go", "year": "2021" }, { "authors": "Marvin Eisenberger; Aysim Toker; Laura Leal-Taixé; Daniel Cremers", "journal": "Advances in Neural information processing systems", "ref_id": "b11", "title": "Deep shells: Unsupervised shape correspondence with optimal transport", "year": "2020" }, { "authors": "Matthias Fey; Jan Eric Lenssen; Frank Weichert; Heinrich Müller", "journal": "", "ref_id": "b12", "title": "Splinecnn: Fast geometric deep learning with continuous b-spline kernels", "year": "2018" }, { "authors": "Dvir Ginzburg; Dan Raviv", "journal": "Springer", "ref_id": "b13", "title": "Cyclic functional mapping: Self-supervised correspondence between non-isometric deformable shapes", "year": "2020" }, { "authors": "Thibault Groueix; Matthew Fisher; Vladimir G Kim; Bryan C Russell; Mathieu Aubry", "journal": "", "ref_id": "b14", "title": "3d-coded: 3d correspondences by deep deformation", "year": "2018" }, { "authors": "Thibault Groueix; Matthew Fisher; Vladimir G Kim; Bryan C Russell; Mathieu Aubry", "journal": "Computer Graphics Forum", "ref_id": "b15", "title": "Unsupervised 
cycle-consistent deformation for shape matching", "year": "2019" }, { "authors": "Oshri Halimi; Or Litany; Emanuele Rodola; Alex M Bronstein; Ron Kimmel", "journal": "", "ref_id": "b16", "title": "Unsupervised learning of dense shape correspondence", "year": "2019" }, { "authors": "Matthew Michael Kazhdan; Hugues Bolitho; Hoppe", "journal": "", "ref_id": "b17", "title": "Poisson surface reconstruction", "year": "2006" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b18", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Itai Lang; Dvir Ginzburg; Shai Avidan; Dan Raviv", "journal": "IEEE", "ref_id": "b19", "title": "Dpc: Unsupervised deep point correspondence via cross and self construction", "year": "2021" }, { "authors": "Lei Li; Nicolas Donati; Maks Ovsjanikov", "journal": "", "ref_id": "b20", "title": "Learning multi-resolution functional maps with spectral attention for robust shape matching", "year": "2022" }, { "authors": "Or Litany; Tal Remez; Emanuele Rodola; Alex Bronstein; Michael Bronstein", "journal": "", "ref_id": "b21", "title": "Deep functional maps: Structured prediction for dense shape correspondence", "year": "2017" }, { "authors": "Matthew Loper; Naureen Mahmood; Javier Romero; Gerard Pons-Moll; Michael J Black", "journal": "ACM transactions on graphics (TOG)", "ref_id": "b22", "title": "Smpl: A skinned multi-person linear model", "year": "2015" }, { "authors": "Mathieu Marsot; Stefanie Wuhrer; Jean-Sébastien Franco; Stephane Durocher", "journal": "", "ref_id": "b23", "title": "A structured latent space for human body motion generation", "year": "2022" }, { "authors": "Federico Monti; Davide Boscaini; Jonathan Masci; Emanuele Rodola; Jan Svoboda; Michael M Bronstein", "journal": "", "ref_id": "b24", "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "year": "2017" }, { "authors": "Vincent Nivoliers; Bruno Lévy; Christophe Geuzaine", "journal": "Journal of Computational and Applied Mathematics", "ref_id": "b25", "title": "Anisotropic and feature sensitive triangular remeshing using normal lifting", "year": "2015" }, { "authors": "Maks Ovsjanikov; Mirela Ben-Chen; Justin Solomon; Adrian Butscher; Leonidas Guibas", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b26", "title": "Functional maps: a flexible representation of maps between shapes", "year": "2012" }, { "authors": "Gabriel Peyré; Laurent D Cohen", "journal": "International Journal of Computer Vision", "ref_id": "b27", "title": "Geodesic remeshing using front propagation", "year": "2006" }, { "authors": "Jean-Michel Roufosse; Abhishek Sharma; Maks Ovsjanikov", "journal": "", "ref_id": "b28", "title": "Unsupervised deep learning for structured shape matching", "year": "2019" }, { "authors": "Abhishek Sharma; Maks Ovsjanikov", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "Weakly supervised deep functional maps for shape matching", "year": "2020" }, { "authors": "Ivan Sipiran; Benjamin Bustos", "journal": "", "ref_id": "b30", "title": "A fully hierarchical approach for finding correspondences in non-rigid shapes", "year": "2013" }, { "authors": "Nitika Verma; Edmond Boyer; Jakob Verbeek", "journal": "", "ref_id": "b31", "title": "Feastnet: Feature-steered graph convolutions for 3d shape analysis", "year": "2018" }, { "authors": "Dong-Ming Yan; Bruno Lévy; Yang Liu; Feng Sun; Wenping Wang", "journal": "Wiley Online Library", "ref_id": "b32", "title": "Isotropic 
remeshing with fast and exact computation of restricted voronoi diagram", "year": "2009" }, { "authors": "Yiming Zeng; Yue Qian; Zhiyu Zhu; Junhui Hou; Hui Yuan; Ying He", "journal": "", "ref_id": "b33", "title": "Corrnet3d: Unsupervised end-to-end learning of dense correspondence for 3d point clouds", "year": "2021" }, { "authors": "Yi Zhou; Connelly Barnes; Lu Jingwan; Yang Jimei; Li Hao", "journal": "", "ref_id": "b34", "title": "On the continuity of rotation representations in neural networks", "year": "2019-06" }, { "authors": "Silvia Zuffi; Angjoo Kanazawa; David W Jacobs; Michael J Black", "journal": "", "ref_id": "b35", "title": "3d menagerie: Modeling the 3d shape and pose of animals", "year": "2017" } ]
[ { "formula_coordinates": [ 5, 34.87, 338.46, 358.7, 24.77 ], "formula_id": "formula_0", "formula_text": "X l = (x l i ∈ R d l ) 1≤i≤n l and Y l = (y l i ∈ R d l ) 1≤i≤m l ," }, { "formula_coordinates": [ 5, 34.87, 398.13, 101.93, 14.16 ], "formula_id": "formula_1", "formula_text": "X L,l = (x L,l i ∈ R d l ) 1≤i≤n l ." }, { "formula_coordinates": [ 5, 72.72, 510.66, 322.15, 27.13 ], "formula_id": "formula_2", "formula_text": "(Π l X →Y ) i j := exp(s l i j ) ∑ m l k=1 exp(s l ik ) (1) (Π l Y→X ) i j := exp(s l ji ) ∑ n l k=1 exp(s l ki )(2)" }, { "formula_coordinates": [ 5, 70.02, 539.18, 157.42, 14.16 ], "formula_id": "formula_3", "formula_text": "s l i j := ⟨x L,l i , y L,l j ⟩ 2 / ∥x L,l i ∥ 2 ∥y L,l j ∥ 2 ." }, { "formula_coordinates": [ 6, 248.29, 158.4, 121.33, 9.9 ], "formula_id": "formula_4", "formula_text": "x i (v) = R i (x 0 (v) -c i ) + u i ∈ R" }, { "formula_coordinates": [ 6, 139.75, 221.28, 255.12, 21.36 ], "formula_id": "formula_5", "formula_text": "l rig (X ) = ∑ (P i ) 1≤i≤n ∑ P j ∈N (P i ) ∑ v∈P i P j E i j v with(3)" }, { "formula_coordinates": [ 6, 130.45, 251.59, 260.94, 14.45 ], "formula_id": "formula_6", "formula_text": "E i j v∈P i P j = (α i (v) + α j (v)) × ∥x i (v) -x j (v)∥ 2 2 . (4" }, { "formula_coordinates": [ 6, 391.39, 254.96, 3.48, 7.77 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 6, 93.08, 515.88, 301.79, 25.2 ], "formula_id": "formula_8", "formula_text": "l network = L ∑ l=0 (λ l g l l geodesic + λ l c l l cycle + λ l r l l rec + λ l m l l match + λ l ri l l rigidity ),(5)" }, { "formula_coordinates": [ 7, 112.3, 53.89, 279.31, 15.43 ], "formula_id": "formula_9", "formula_text": "l l geodesic = ∥Π l X →Y D l Y Π l X →Y T -D l X ∥ 2 2 + ∥Π l Y→X D l X Π l Y→X T -D l Y ∥ 2 2" }, { "formula_coordinates": [ 7, 34.37, 144.34, 360.51, 28.05 ], "formula_id": "formula_10", "formula_text": "l l cycle = ∥Π l X →Y (Π l Y→X C X l ) -C X l ∥ 2 2 + ∥Π l Y→X (Π l X →Y C Y l ) - C Y l ∥ 2 2 ." }, { "formula_coordinates": [ 7, 170.11, 206.68, 194.25, 13.53 ], "formula_id": "formula_11", "formula_text": "l l rec = ∥Π l X →X C X l -C X l ∥ 2 2 + ∥Π l Y→Y C Y l -C Y l ∥ 2 2" }, { "formula_coordinates": [ 7, 203.86, 279.71, 160.9, 14.5 ], "formula_id": "formula_12", "formula_text": "X l l -Π l X →Y C Y l ∥ 2 2 + ∥C Y l l -Π l Y→X C X l ∥ 2 2" }, { "formula_coordinates": [ 8, 115.47, 120.75, 279.4, 22.82 ], "formula_id": "formula_13", "formula_text": "MGE(Π X →Y ) := 1 |X | ∑ x∈X d Y (Π X →Y (x), Π gt X →Y (x)),(6)" }, { "formula_coordinates": [ 8, 64.36, 171.83, 330.51, 28.33 ], "formula_id": "formula_14", "formula_text": "CycleGE(Π X →Y ) := 1 |X | 2 ∑ x 1 ,x 2 ∈X ×X |1 - d X (x 1 , x 2 ) d X ( x1 , x2 ) | with xi := Π Y→X (Π X →Y (x i )),(7)" } ]
2024-03-31
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b67", "b52", "b62", "b59", "b74", "b75", "b48", "b54", "b74", "b79", "b63", "b50", "b53", "b70", "b47", "b66", "b62", "b15", "b25", "b29", "b20", "b62", "b75", "b74", "b75" ], "table_ref": [], "text": "Human avatar reconstruction has experienced rapid development in recent years and shown great potential for applications of AR/VR, metaverse, etc [5,68]. One of the challenges in this field is the data acquisition. Previous works typically require an expensive setup for multi-view RGB video [53], textured scan video [63], or multi-view static images [60]. Recently, there is a tendency to utilize easily accessible data sources for this task, such as monocular RGB video [75] or image set [76].\nThis leads to a pivotal question: \"Is it possible to leverage a cheaper data source to reconstruct human avatars?\" Intuitively, the cheaper data should be characterized by its limited quantity and the unconstrained nature of human behaviors it captures. As this paper will demonstrate, the answer leans toward the affirmative, and we refer to such a data source as few-shot unconstrained images. These data can be obtained from either a personal photo album or key frames of a video capture. Thereby, we are inspired to explore the human avatar reconstruction task with the fewshot unconstrained images.\nFrom a technical perspective, we carefully design a 3D representation for the difficulties caused by the aforementioned data setting. When it comes to modeling dynamic humans, a popular idea is the use of a dynamic neural radiance field (NeRF) [49]. Nevertheless, despite various dynamic designs being reported [4, 55,75,80], accurately driving a volume space using limited data remains challenging. Recently, deep marching tetrahedra (DMTet) [64] proposes a hybrid method to produce a triangle mesh through a differentiable process. Due to the ease of mesh deformation, we are motivated to model dynamic humans with the DMTet representation. Specifically, we integrate skinning weights and blendshapes defined by SMPLX [51] with the static DMTet to create a drivable tetrahedral representation for adapting the unconstrained data. Furthermore, conducive to the few-shot task, the driveable representation allows the use of canonical SMPLX shape as the prior to make the tetrahedral grid well-initialized. In this manner, we extend the traditional static-scene DMTet to model the articulated human body under a dynamic condition.\nIn terms of learning from a limited amount of data, we employ the score distillation sampling (SDS) [54,71] technique to generate plausible textures for unseen regions. Different from existing SDS-based image-to-3D tasks that rely on textual descriptions or captions [48,67], we directly utilize the image as the prompt [44] so that faithful visual feature can be preserved. We refer to the SDS-based optimization as a few-shot guidance. Additionally, we incorporate traditional reconstruction optimization, namely fewshot reference, in our pipeline.\nOur overall framework is called HaveFun, Human AVatar rEconstruction from Few-shot UNconstrained images. Note that the human body and hand have distinct properties. The body is characterized by intricate geometry (e.g., hair/cloth) and remarkable facial features, while the hand exhibits smooth bare geometry and subtle palm wrinkles. To evaluate our framework for the human body and hand, we develop benchmarks for them with the assistance of XHumans [63] and DART [16]. 
Remarkably, HaveFun effectively addresses both scenarios of the human body and hand. As a result, our approach can reconstruct human avatars with few-shot (as few as 2) dynamic images and achieve realistic rendering quality. Besides, we can perform avatar animation in various unseen human poses.\nOur main contributions are summarized as follows:\n• We propose a novel framework, termed HaveFun, to solve the challenging problem of human avatar reconstruction from few-shot unconstrained images. • We explore a drivable tetrahedral representation for ar-ticulated human motion and an SDS loss for non-static human reconstruction.\n• We develop benchmarks for the few-shot dynamic body/hand reconstruction task. Extensive evaluations indicate our method outperforms previous one-shot [26] or video-based [30] approaches by a large margin.\nWe believe our endeavors would enhance the practical significance of this research area, paving a new way for human avatar reconstruction and real-world applications. [21,63], or image set [76]. Also, many of them utilize NeRF or its variations as the 3D representation and craft a motion field to connect the gap between body articulation and the canonical NeRF space. For example, HumanNeRF [75] created a personalized avatar by training an inverse skinning field using monocular video data. LISA [9] utilized multi-view video data to learn hand appearance and incorporated a multi-layer perceptron (MLP) to forecast skinning weights for hand animation. Person-NeRF [76] gathered hundreds of images of a human individual and generated an avatar with disentangled attributes. Differing from the majority of prior studies, our focus lies in addressing the few-shot body reconstruction challenge. Moreover, we explore articulation-friendly tetrahedral grid as the 3D representation." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b5", "b11", "b13", "b31", "b76", "b12", "b56", "b0", "b72", "b9", "b59", "b60", "b77", "b78", "b59", "b10", "b34", "b49", "b80", "b49", "b55", "b47", "b53", "b66", "b70", "b53", "b25" ], "table_ref": [], "text": "Few-shot human avatar creation. Many research efforts rely on pre-trained generative models [3,6,12,14,32,66,77] and accomplish one-shot reconstructions through GAN inversion techniques [13,57]. This line of research typically requires only a single image to recover a latent representation that aligns with the ground truth. Nevertheless, these pipelines often struggle to accurately represent the data that falls outside the GAN distribution [1,73]. Instead of GAN inversion, another approach involves directly extracting image features and predicting a pixel-aligned implicit field to represent the human [10,60,61,78,79]. As a groundbreaking work, PIFu [60] employed MLPs to model both the occupancy value and color of the human body from one or several images. Despite the few-shot setting in the inference phase, the training of pixel-aligned methods still demands a large-size image set. Further, they treat the human as a static scene, neglecting the dynamic nature of the human body. By contrast, this paper can handle few-shot dynamic human images without the need for additional auxiliary data collection.\nSparse-view 3D reconstruction. When it comes to fewshot reconstruction, traditional methods typically optimize a NeRF using sparse-view data with the aid of geometry regularization, semantic consistency, depth supervision, etc. [11,29,35,46,50,81]. 
For example, RegNeRF [50] introduced a patch regularizer to mitigate geometry artifacts and employed a log-likelihood model to ensure multi-view appearance consistency. In contrast to this line, we utilize a pre-trained diffusion model to regularize unseen textures from novel viewpoints. Moreover, in contrast to static scenes typically handled by related works, our approach enables dynamic body reconstruction using sparse-view data.\nAvatar creation with text-based priors.\nRecently, textto-3D task has gained popularity, thanks to the pre-trained language-vision models [22,56]. Many reports have achieved remarkable performance using the diffusion model [41,48,54,67,71]. As a pioneering effort, DreamFusion [54] proposed SDS loss to optimize a NeRF with text prompts. Furthermore, text-guided strategies have been incorporated into the field of human avatar creation [2, 23-26, 31, 36, 40, 83]. These approaches allow for the generation of a realistic 3D human appearance that aligns with text semantics. While the text-based paradigm is effective in creating famous characters, it is more challenging when it comes to reconstructing actual human individuals. For example, TeCH [26] accomplished one-shot human avatar reconstruction with 5 stages of VQA [37] caption, Dream-Booth [59] fine-tuning, geometry optimization, geometry post-processing, and texture optimization. In contrast, our method can be trained in an end-to-end manner. Hence, we believe our approach is inherently more elegant, precise, and accurate than text-based methods for the reconstruction task since we directly employ image features as guidance without the potential ambiguity caused by image captions." }, { "figure_ref": [ "fig_0" ], "heading": "Approach", "publication_ref": [], "table_ref": [], "text": "Given a personalized unconstrained photo album I = {I i } N i=1 (N ≤ 8), this paper aims to reconstruct a 3D representation G for free-viewpoint rendering and free-pose animation. G takes explicit viewpoint R ∈ R d R , human articulated poses θ ∈ R d θ , and expression coefficients ψ ∈ R d ψ as the input and generates a 2D image Î:\nG : (R, θ, ψ) ∈ R d R × R d θ × R d ψ → Î ∈ R H×W ×3 ,(1)\nwhere H = W = 512 denote the image height and width. The 3D representation G = {M, C, W, E} includes a triangular mesh M, a texture field C, skinning weights W, and expression blendshapes E. As shown in Fig. 2, we build a drivable tetrahedral representation as the core of G to produce {M, C, W, E} (Sec. 3.2). Additionally, we design two phases of few-shot reference and few-shot guidance to train our framework (Sec. 3.3). We describe each part in detail below." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b41", "b42", "b44", "b64", "b55", "b53" ], "table_ref": [], "text": "Deep marching tetrahedra. DMTet is a hybrid representation for 3D geometry, denoted as (V t , T), where V t , T are vertices and the tetrahedral indices, respectively. Each tetrahedron t ∈ T is represented by four vertices {v t a , v t b , v t c , v t d }. Each vertex v t = (x, y, z) ∈ V t has a 3D position vector and a signed distance value s. If two vertices in a tetrahedron have different signs of s (e.g., v t a with s a < 0 and v t b with s b > 0), they can determine a vertex v m in triangular mesh M:\nv m = (v t a + δv t a ) • s t b -(v t b + δv t b ) • s a s b -s a ,(2)\nwhere δv represents the estimated vertex displacement. As a result, the triangular mesh can be generated using the differentiable volume subdivision method. 
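To illustrate Eq. (2), the snippet below computes the surface vertex produced by a single sign-changing tetrahedron edge; it is a simplified sketch in plain PyTorch (one edge at a time) rather than the full differentiable marching-tetrahedra extraction.

```python
import torch

def edge_crossing_vertex(v_a, v_b, dv_a, dv_b, s_a, s_b):
    # Linear interpolation of Eq. (2); only valid when s_a and s_b have
    # opposite signs, i.e., the zero level set crosses the edge (v_a, v_b).
    assert (s_a < 0) != (s_b < 0), "edge must straddle the surface"
    return ((v_a + dv_a) * s_b - (v_b + dv_b) * s_a) / (s_b - s_a)

# Example: SDF values of opposite sign place the vertex 40% of the way along the edge.
v_a, v_b = torch.tensor([0.0, 0.0, 0.0]), torch.tensor([1.0, 0.0, 0.0])
dv = torch.zeros(3)
v_m = edge_crossing_vertex(v_a, v_b, dv, dv, s_a=-0.2, s_b=0.3)  # -> [0.4, 0.0, 0.0]
```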
However, the DMTet cannot represent an articulated dynamic object.\nViewpoint-conditioned diffusion model. Zero123 [44] and follow-up works [42,43,45,65] introduce a denoising diffusion model that leverage posed CLIP embedding, involving both the visual CLIP feature [56] and the camera viewpoint δR, as the conditioning elements. Consistent with the traditional diffusion model [22], Zero123 has a forward sampling process:\nz t = √ ᾱt I + (1 -ᾱt )ϵ, ϵ ∼ N (0, 1),(3)\nwhere ᾱt is a hyperparameter and z t is noising image at the t-th step. Subsequently, the noise prediction model ε is optimized to estimate the added noise ϵ:\nmin E t,ϵ ∥ϵ -ε (z t , t, CLIP(I), δR))∥ 2 2 . (4\n)\nAfter training, the model can generate samples from an arbitrary viewpoint given an image I:\nI δR = Zero123(I, δR).(5)\nThough free-viewpoint data can be generated using the pure 2D pipeline, it lacks rigorous 3D consistency and is unable to control articulated human poses. Score distillation sampling. While the diffusion model exhibits remarkable proficiency in generating 2D images, it is constrained by its inherent incapacity to produce a 3D representation directly. For 3D object generation from text prompt, DreamFusion proposes the SDS loss [54] that optimized a 3D representation with diffusion guidance, which can be formulated as\n∇ η L SDS ≜ E t,ϵ w(t) (ε (z t , t, y) -ϵ) ∂x ∂η ,(6)\nwhere η represents the optimization parameters; w is a weighting function that depends on the timestep; and y is the text-conditioned feature. " }, { "figure_ref": [], "heading": "Drivable Tetrahedral Representation", "publication_ref": [], "table_ref": [], "text": "! ! \" # ! \" $ ! % \"#$%& Skinning w/ # ' , $ ' Skinning w/ # ! , $ ! Skinning w/ # ' , $ ' Few-Shot Reference Skinning w/ # ! , $ ! Few-Shot Guidance % ()( % ()( Zero123 Zero123 + + + + ! ' \" # ' \" $ ' Aligned-view rendering Backpropagation Random-view rendering % \"#$%&" }, { "figure_ref": [ "fig_0" ], "heading": "Drivable Tetrahedral Representation", "publication_ref": [ "b63", "b46", "b50", "b50", "b46", "b16" ], "table_ref": [], "text": "Following DMTet [64], we employ a hybrid representation for 3D geometry. As introduced in Sec 3.1, this method depends on a pre-defined tetrahedral grid, along with learnable vertex displacements δV t = {δv t } and signed distance values S = {s}, to represent an arbitrary 3D geometry. Based on its volume subdivision method, the triangular mesh M = (V m , F) can be obtained with vertices V m = {v m } and faces F.\nNevertheless, the tetrahedral grid can only represent a static scene, while the human body has dynamic articulated poses. To tackle human articulated motion, we introduce the skinning mechanism [47] for the generated mesh. As for each vertex v m , we find the nearest triangle on a parametric mesh [51,58] with vertices {v p 1 , v p 2 , v p 3 }. The skinning weights and expression blendshapes of v m can be formulated as follows,\nW v m = uW p v p 1 + vW p v p 2 + γW p v p 3 E v m = uE p v p 1 + vE p v p 2 + γE p v p 3 ,(7)\nwhere u, v, γ denote the barycentric coordinates of the projection of v m onto the face; W p , E p are the skinning weights and blendshapes defined by SMPLX [51] or MANO [58]. With the retrieval of W, E, mesh vertices can be deformed to the posed space:\nṽm = B b=1 W v m ,b G b (θ, J)G(0, J) -1 (v m +E v m ψ), (8\n)\nwhere G is the kinematic transformation matrix, b indexes articulated bones, and J denotes bone joints. 
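A condensed sketch of Eqs. (7)-(8) is given below: skinning weights and blendshapes are pulled from the nearest template triangle by barycentric interpolation, then used for standard linear blend skinning. The nearest-triangle query, the per-bone transforms, and all tensor names are assumptions for illustration (e.g., precomputed from the SMPLX kinematic chain), not the authors' exact implementation.

```python
import torch

def interpolate_skinning(W_tri, E_tri, bary):
    # Eq. (7): barycentric mix over the template triangle's three vertices.
    # W_tri: (3, B) skinning weights, E_tri: (3, 3, K) blendshapes, bary: (3,).
    W = (bary[:, None] * W_tri).sum(dim=0)                    # (B,)
    E = (bary[:, None, None] * E_tri).sum(dim=0)              # (3, K)
    return W, E

def pose_vertex(v, W, E, psi, G_posed, G_rest_inv):
    # Eq. (8): add the expression offset, then blend the per-bone rigid
    # transforms G_b(theta, J) G(0, J)^-1 with weights W to pose the vertex.
    v_c = v + E @ psi                                         # canonical vertex + expression
    T = torch.einsum("b,bij->ij", W, G_posed @ G_rest_inv)    # (4, 4) blended transform
    v_h = torch.cat([v_c, v_c.new_ones(1)])                   # homogeneous coordinates
    return (T @ v_h)[:3]
```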
Please refer to SMPL [47] for more details of the skinning mechanism.\nConsidering the limited quantity of training data and various articulated poses, the initialization of the tetrahedral grid is important yet non-trivial. Benefiting from the aforementioned drivable mechanism, we can use a canonical SMPLX template mesh for the initialization, as shown in Fig. 2. This approach allows us to incorporate human geometry prior into the 3D representation.\nIn addition, a texture field is adopted for colored appearance. Following Get3D [17], we find the surface points P s on M that align with pixels by the rasterization of the deformed mesh ({ṽ m }, F). Then, the texture field C with MLPs can predict RGB values Î as follows,\nC(P s ) : P s ∈ R H×W ×3 → Î ∈ R H×W ×3 .\n(9)" }, { "figure_ref": [], "heading": "Optimization", "publication_ref": [ "b4", "b50", "b7" ], "table_ref": [], "text": "We use few-shot images {I i } N i=1 to optimize the drivable tetrahedral representation with the optimization parameter η G = {δV t , S, C}. For each image, we use off-the-shelf tools [5,51] to obtain parametric geometry {θ i , ψ i } N i=1 with pose and expression coefficients. Given aligned or novel viewpoints R align , R novel , images can be rendered with our model:\nÎalign i = G(R align i , θ i , ψ i ) Înovel i = G(R novel , θ i , ψ i ). (10\n)\nFew-shot reference for body. In order to enhance the ability to express human body characteristics with fewshot images {I i } N i=1 , we use off-the-shelf tools [15, 78] to estimate the mask {M i } N i=1 , normal {N i } N i=1 , and depth {D i } N i=1 . Then, we design reconstruction losses as follows,\nL texture = LPIPS( Îalign i , I i ) + || Îalign i -I i || 2 2 L normal = 1-< Nalign i , N i > L depth = cov( Dalign i ,Di) σ Dalign i σ D i L mask = || Malign i -M i || 2 2 , (11\n)\nwhere < •, • > represents cosine similarity, LPIPS computes perceptual similarity [84]. L depth is formulated as pearson correlation coefficient [8].\nTherefore, L recon = L texture + L normal + L depth + L mask for body avatar reconstruction. In addition, we render high-resolution (i.e., 256 × 256) hand/head regions to compute L hand recon , L head recon using Eq. ( 11) and add them to the final L recon .\nFew-shot reference for hand. The properties of the hand are distinctive from the body, i.e., subtle wrinkles and bare geometry. Hence, the benefits from normal/depth supervision are limited and we only use L texture and L mask in Eq. ( 11) for hand optimization. In addition, we design a Laplacian constraint for geometry smoothness as follows,\nL lap = ||L M • nM || 2 , (12\n)\nwhere L M and nM denote Laplacian matrix and vertex normals for the generated triangle mesh M. Finally, L recon = L texture + L mask + λ lap L lap for hand avatar creation.\nFew-shot guidance. Few-shot images cannot cover complete visual features of the human body because of sparse viewpoints and articulated self-occlusion. Therefore, we improve random-view rendering with the diffusion prior. Specifically, pre-trained Zero123 [44] is employed as the prior model, which takes image features and viewpoints as the condition to produce random-viewpoint images, as described in Sec. 3.1. Given Înovel i , we first use Eq. (3) to sample nosing image z i,t with gaussian noise ϵ and compute relative viewpoint δR based on R align i and R novel . 
To generate gradients to optimize the 3D representation, we employ SDS loss derived by Zero123 as follows,\n∇ η G L sds = E t,ϵ w(t) (ε (z i,t , t, CLIP(I i ), δR) -ϵ) ∂x ∂η G .\n(13) Overall, L = L recon +λ sds L sds is employed to optimize the 3D representation G for creating human avatars. " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b73", "b62", "b15", "b4" ], "table_ref": [], "text": "We build new dataset benchmarks to investigate the task of few-shot dynamic human reconstruction. The purpose of our dataset is to generate training data with casual human poses and conduct multi-view evaluations under the canonical human pose. To present sufficient information with fewshot data, extremely articulated self-occlusion is neglected.\nFor quantitative metrics, we report LPIPS [84], PSNR, and SSIM [74] to reflect rendering quality.\nFS-XHumans. XHumans [63] offer 3D clothed human scans with 20 different identities and various poses. For each identity, we choose 8 scans with different poses and render them from different viewpoints to create our training data. Furthermore, as XHumans do not provide canonicalpose data, we select the scan from their dataset that most closely matches the A-pose and render it from 24 spherically distributed viewpoints for evaluation. FS-DART. We utilize DART [16], a hand texture model, to generate 100 hand identities with distinct shapes and textures. To acquire training poses, we collect real hand poses using a monocular reconstruction method [5]. Each hand identity has 8 training data with varying poses and viewpoints. For evaluation, we render zero-pose hand samples from 24 spherically distributed viewpoints.\nPlease refer to suppl. material for more dataset details." }, { "figure_ref": [ "fig_3", "fig_4", "fig_3", "fig_4" ], "heading": "Ablation Studies", "publication_ref": [ "b25", "b29" ], "table_ref": [ "tab_1" ], "text": "Considering that the primary information can be presented on the palm and back of the hand, we use the 2-shot setting for hand ablation studies. In contrast, the lateral body holds important information, so we employ the 4-shot setting to the effect of SDS should be compatible with unseen areas. That is, in contrast to the 4-shot setting, the 2-shot reconstruction exhibits a heightened reliance on the SDS loss due to the lack of information. As shown in Table 1, λ sds offers different optimal choices for varying training data amounts. The effect of SDS loss is also demonstrated in Fig. 4. As indicated by arrows, λ sds = 0 prevents the model from presenting reasonable unseen textures. Besides, over-size SDS constraints would lead to the problem of color distortion, thereby harming the reconstruction quality. As a result, we use λ sds = 0.05, 0.01, 0.01 for 2-, 4-, and 8-shot reconstruction settings.\nEffect of Laplacian normal constraint. Instead of ground-truth normal/depth supervision, we design a Laplacian normal loss to regularize hand geometry. As illustrated in Fig. 5, λ lap = 0 induces fractured geometry, and λ lap = 5 can produce a over-smooth geometry. These two situations are not suited to represent the hand. That is, the hand has subtle palm wrinkles and texture. If the geometry can effectively capture wrinkles, it could contribute to generating faithful textures. Surprisingly, our experiment shows that the proper Laplacian normal constraint can achieve this. 
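Before examining specific values of λ lap, Eq. (12) can be made concrete with a short sketch. Here L M is instantiated as a dense uniform (combinatorial) graph Laplacian, a common choice made for readability; it may differ from the exact construction used in the paper, and a sparse Laplacian would be preferable for large meshes.

```python
import torch

def uniform_laplacian(num_verts, faces):
    # Combinatorial Laplacian L = D - A built from triangle faces (LongTensor of shape (F, 3)).
    edges = torch.cat([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]], dim=0)
    A = torch.zeros(num_verts, num_verts)
    A[edges[:, 0], edges[:, 1]] = 1.0
    A = torch.maximum(A, A.t())                 # symmetrize the adjacency
    return torch.diag(A.sum(dim=1)) - A

def laplacian_normal_loss(vert_normals, faces):
    # Eq. (12): penalize non-smooth variation of vertex normals across the mesh.
    L = uniform_laplacian(vert_normals.shape[0], faces)
    return (L @ vert_normals).pow(2).mean()
```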
As shown, λ lap = 1 promotes our model to represent detailed hand shapes. Benefiting from the informative geometry, the texture reconstruction is more accurate, as indicated by arrows. [26] only supports single-image reconstruction. SelfRecon [30] results follow the format of \"8-/100-shot\", whereas our metrics are provided for \"2-/4-/8-shot\" tasks. For FS-DART, all metrics are for \"2-/4-/8-shot\" tasks.\nAblation study on N -shot reconstruction. Our framework can reconstruct a human avatar with an arbitrary quantity of images. We argue that one-shot data cannot furnish adequate information for human reconstruction, so we demonstrate 2-, 4-, and 8-shot tasks as examples. Referring to our results in Table 2, 2-shot metrics are very close to those of the 8-shot task, indicating our approach is adept at reconstructing humans from minimal data amount. When it comes to the hand reconstruction task, some 2-shot results are even better than those of the 8-shot task. The primary reason is that the hand feature is predominantly influenced by the hand palm or back, with the lateral hand contributing only minimal additional information.\nSimilar conclusions can be observed in Figs. 4 and5. Visually, the 8-shot task shows the best quality with faithful hair geometries, cloth textures, palm wrinkles, etc." }, { "figure_ref": [ "fig_5" ], "heading": "Comparison with Prior Arts", "publication_ref": [ "b29", "b25" ], "table_ref": [ "tab_1" ], "text": "We compare our approach with SelfRecon [30] and TeCH [26]. The former can take dynamic images or videos as the input, whereas the latter uses the DMTet and SDS prior for static human reconstruction. We present typical results in Figs. 6 and7. As shown, SelfRecon tends to generate overly smooth human appearances and inherently fails to learn effectively from limited data. Moreover, due to the absence of expression handling, SelfRecon still struggles to produce a plausible portrait, even when trained using video (100-shot) data. Text-based TeCH can produce detailed texture and sound geometry with a single image. However, TeCH cannot perform a faithful reconstruction. As shown in the first row of Fig. 7, facial identity cannot preserved by TeCH. Furthermore, features from the text caption would be improperly introduced to the avatar. Referring to the second row of Fig. 7, the front head is aligned with the reference image while the lateral head is dominated by the BLIP [37] caption of \"caucasian\". Hence, since the caption cannot perfectly describe visual details, the text-based approach is akin to a generative method rather than a reconstructive one. In contrast, thanks to pure visual prompts from few-shot images, our method can perform a faithful avatar reconstruction for the body, face, and hand.\nIt is worthwhile to revisit our 2-and 8-shot results again. As shown in the first row of Fig. 7, the 2-shot result is better than the 8-shot one in terms of portrait reconstruction. This discrepancy arises from the diverse expressions presented in training data. That is, the 2-shot task relies on a single image to describe the face, while the 8-shot task involves lateral body data with various facial expressions. Due to the imperfect expression blendshapes from SMPLX, the performance of 8-shot portrait reconstruction from data with diverse expressions is constrained. 
In addition, the 8-shot reconstruction yields enhanced results for hands and lateral textures owing to the expanded visibility of data.\nFor quantitative comparison, our method achieves the best results in all metrics, as shown in Table 2, indicating superior rendering quality of the HaveFun framework." }, { "figure_ref": [], "heading": "Applications", "publication_ref": [], "table_ref": [], "text": "Animation results. Thanks to the drivable tetrahedral representation, our method can perform free-pose articulated deformation for the body and hand so that complex human motion can be presented, as shown in Fig. 8.\nReconstructing human avatars with real-world casual capture. The FS-DART is a synthetic dataset, and the FS-XHumans provides real human images but captured in a stu- dio. Therefore, we acquire real-world unconstrained images to validate our approach. As shown in Fig. 9, based on 2 or 4 images, our method excels in canonical-space avatar reconstruction and free-pose animation." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "This paper poses a novel research problem of human avatar reconstruction from few-shot unconstrained images. We propose a HaveFun framework with a drivable tetrahedral representation to solve this issue. To optimize our 3D representation, we design a two-phase method with few-shot reference and few-shot guidance. In addition, we develop evaluation benchmarks for the human body and hand. As a result, our approach can produce animatable human avatars with superior rendering quality, which we believe enables a new way for real-world avatar creation." }, { "figure_ref": [ "fig_2", "fig_5" ], "heading": "HAVE-FUN: Human Avatar Reconstruction from Few-Shot Unconstrained Images", "publication_ref": [], "table_ref": [], "text": "Supplementary Material Figure 10. Training data for Fig. 6 of the main text. The first N samples in rows are used for the N -shot task." }, { "figure_ref": [ "fig_2", "fig_8", "fig_10", "fig_2", "fig_9" ], "heading": "Dataset Details", "publication_ref": [ "b15", "b4", "b62", "b4", "b50" ], "table_ref": [], "text": "FS-DART. Our FS-DART is a synthetic dataset based on the DART [16] hand model. We create 100 hand identities with a variety of skin colors and hand shapes. In addition, special hand features such as scars, moles, and nail polish are also included in hand textures. As for hand poses, we capture real hand videos and extract pose parameters with MobRecon [5] for training data creation. As shown in Fig. 10, our training samples contain unconstrained casual hand poses. Note that self-occluded poses are not involved so that few-shot data can exhibit sufficient information for the hand reconstruction task.\nIn terms of evaluation, we assess the effectiveness of our method under the zero hand pose to unveil the performance in shape and texture reconstruction. Referring to Fig. 11, the hand in the zero pose is rendered from 24 sphere-distributed viewpoints, and our model can generate corresponding results for metric computation. FS-XHumans. Our FS-XHumans dataset is built on really captured XHumans [63], which is a 3D scan dataset with 19 actual human identities. For each individual, the XHumans provide 3D scans of motion sequences, including diverse body poses, hand gestures, and facial expressions. Thereby, we select 8 scans from a sequence to produce training data. During data selection, we ensure the diversity of poses and expressions for our training samples, as illustrated in Fig 12 . 
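One simple way to realize this kind of diversity-aware frame selection, not necessarily the exact procedure used to build the benchmark, is greedy farthest-point selection in the SMPLX pose/expression parameter space, sketched below.

```python
import numpy as np

def select_diverse_frames(params, k=8):
    # Greedily pick k frames whose pose/expression vectors are mutually far
    # apart (Euclidean distance in parameter space). params has shape (T, D).
    chosen = [0]                                     # start from the first frame
    dist = np.linalg.norm(params - params[0], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dist))                   # frame farthest from the current set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(params - params[nxt], axis=1))
    return chosen
```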
Due to the absence of canonical-pose samples, we opt for a scan closely resembling the A-pose to generate testing samples from 24 sphere-distributed viewpoints for metric computation, as depicted in Fig. 13.\nIt is worthwhile to note that the training data do not have to strictly follow viewpoints in Figs. 10 and12. We use this viewpoint configuration as an example because it is an efficient setting for few-shot data acquisition. The viewpoints of arbitrarily captured data can be obtained through parametric geometry estimation [5,51]. From the perspective of real-world applications, our data setup is reasonable because obtaining data similar to Figs. 10 and 12 in practical capture scenarios is straightforward." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "Tetrahedral grid. We produce a tetrahedral grid in a 128 3 -size cube using 277,410 vertices and 1,524,684 tetrahedra. Positional displacements and an SDF value are attached to vertices, and we explicitly treat them as optimization parameters without resorting to neural networks. Texture field. To predict RGB values, we design texture filed C using a 3-layer MLP network with a hidden dimension of 64 and a hash positional encoding with a maximum resolution of 2048 and 16 resolution levels. Specifically, the triangle mesh M extracted from DMtet is deformed to match the posed human space aligned with the training im-a sks man, black short hair, serious, sks white v neck t shirt, sks gray rolled up jeans pants, sks black loafers shoes, standing a sks man, brown bald hair, serious, sks black t shirt, sks blue shorts pants, sks black black tennis shoes, standing a sks woman, brown short hair, caucasian, sks yellow v neck shirt, sks red rolled up jeans pants, sks black black and white sneakers shoes, standing a sks man, brown short hair, serious, sks black long sleeve sweater, sks brown plaid shorts pants, sks black adidas sneakers shoes, standing ages. Each pixel is mapped onto the deformed mesh surface, represented by its barycentric coordinates. Then, we query points P s on the canonical triangle mesh M with the barycentric and the rendered image can be obtained with Î = C(P s ).\nOptimization details. Our experiments are conducted on a NVIDIA A100 GPU. The whole framework is trained in an end-to-end manner.\nFor the body reconstruction task, the optimization comprises 17,000 iterations. The learning rate starts at 0.05 and is decreased by a factor of 0.1 at the 7,500th and 15,000 steps. The optimization process for a human body takes approximately 4 hours.\nIn terms of the hand reconstruction task, the optimization requires 2,000 iterations with a learning rate of 0.05. The optimization of a hand identity only costs about 10 minutes. " }, { "figure_ref": [], "heading": "Details of Compared Methods", "publication_ref": [ "b29" ], "table_ref": [], "text": "Due to the absence of existing methods designed for fewshot dynamic human reconstruction, we compare HaveFun with a video-based approach and a one-shot static pipeline.\nSelfRecon. In contrast to our data configuration, SelfRecon [30] is designed for self-rotated video data. Despite this difference, SelfRecon can perform human reconstruction under our data setup. That is, few-shot unconstrained images used in our work can be treated as key frames of a video. Hence, it is reasonable to compare our approach with SelfRecon. 
To this end, we acquire officially released implementation codes from https://github.com/ jby1993/SelfReconCode and re-implement the part of the dataset for the adaptation of few-shot image input. In addition, we set a batch size of 2 and a training step of 15,000. The training process costs about 12 hours for a human individual. Furthermore, we also train a SelfRecon model following its original data setting. That is, we generate video data consisting of 100 frames, containing uniformly self-rotated body images, as shown in \"SelfRecon (100-shot)\" in Fig. 7 of the main text. The SelfRecon results are also displayed in our suppl. video. As shown, the instability in geometry and texture is evident across different viewpoints due to the employed training samples with highly articulated motion and the intrinsic mechanism of viewpoint-dependent color prediction.\nFor the hand experiment, we integrate MANO articulation into SelfRecon and adopt the same settings as the body experiment.\nTeCH. TeCH is a one-shot human reconstruction method utilizing SDS guidance, similar to the technical pipeline in our HaveFun framework. For comparison, we employ the In addition, we argue that the stage of geometry post-processing is tricky due to the replacement of the hand shape with the SMPLX hand mesh. That is, the hand is reconstructed using SM-PLX rather than TeCH. For a fair experimental setup, we omit the geometry post-processing and jointly optimize the complete geometry and texture. All other settings adhere to the original TeCH report, and it takes approximately 6 hours to generate a human avatar.\nL normal L depth PSNR ↑ SSIM ↑ LPIPS ↓ 4-shot FS-XHumans ✓ ✓25\nAs the VQA caption of hands is unexplored, we do not include the comparison of TeCH in the hand task." }, { "figure_ref": [ "fig_12", "fig_3", "fig_4", "fig_5" ], "heading": "More Results", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Effects of normal and depth losses. Referring to Table 3 and Fig. 14, normal and depth losses give rise to instructive effects on human avatar reconstruction. Nevertheless, removing depth loss only leads to a minor performance drop. Due to the often inaccurate estimates of monocular depth, depth supervision is optional in real-world applications, and the HaveFun framework can present human avatars without depth labels.\nSDS loss for the 8-shot task. Table 4 shows the effect of SDS loss in the 8-shot FS-XHumans experiment, which also supports the conclusion of the main text.\nSide-view results of the 4-shot setting. In Fig. 4 of the main text, we use a side-view for 2/8-shot tasks to highlight the details of hair reconstruction and another view for the 4-shot task to unveil the SDS effect for unseen regions. To fully present these experiments, we supplement 4-shot sideview results in Fig. 15 for comparison. The Zero123 guidance. As shown in the Fig. 16, the purely 2D method Zero123 produces low-quality guidance images (e.g., face). Our model achieves performance beyond Zero123 because of a 3D-aware representation and depth/normal supervision. More results in dynamic demonstration. Please refer to the project page https://seanchenxy.github. io/HaveFunWeb for dynamic results." }, { "figure_ref": [ "fig_16", "fig_6" ], "heading": "Limitations and Future Works", "publication_ref": [ "b50", "b69" ], "table_ref": [], "text": "Expression control. To handle varying expressions in training data, we transform expression blendshapes defined by SMPLX [51] into our framework. 
Nevertheless, the blendshapes are not accurate enough, harming the precision of expression control. The impact on portrait reconstruction is explained in Fig. 7 of the main text. To tackle this difficulty, we will introduce advanced expression control methods (e.g., [70]) to the HaveFun framework. Full body integration with part-wise few-shot data. This paper streamlines the data collection process and proves that few-shot unconstrained images are cheaper data sources for human avatar creation. In addition, we demonstrate that such a cheap data source is effective for the human body and hand. Nevertheless, we have not used the HaveFun framework for expressive portrait reconstruction. On one hand, because of the aforementioned limitations on facial expression, the HaveFun framework has difficulty in precise expression modeling. In addition, enhancing the accuracy of expression control is far from sufficient for modeling the portrait. For example, because of the lack of inner-mouth regions in the few-shot training data, the avatar is unable to perform a behavior with an open mouth (see Fig. 17(a)). Therefore, we will explore a few-shot unconstrained data setup for portrait reconstruction. Finally, the portrait, body, and hand can be reconstructed from part-wise few-shot data and integrated into a full representation for an expressive human avatar.\nErrors caused by data pre-processing. As illustrated in Fig. 17, inaccurate image matting results in the introduction of background color to the human texture (Fig. \n17" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgment The work was supported in part by the Basic Research Project No.HZQB-KCZYZ-2021067 of Hetao Shenzhen-HK S&T Cooperation Zone, Guangdong Provincial Outstanding Youth Project No. 2023B1515020055, the National Key R&D Program of China with grant No.2018YFB1800800, by Shenzhen Outstanding Talents Training Fund 202002, by Guangdong Research Projects No.2017ZT07X152 and No.2019CX01X104, by Key Area R&D Program of Guangdong Province (Grant No.2018B030338001), by the Guangdong Provincial Key Laboratory of Future Networks of Intelligence (Grant No.2022B1212010001), and by Shenzhen Key Laboratory of Big Data and Artificial Intelligence (Grant No.ZDSYS201707251409055). It is also partly supported by NSFC-62172348, Shenzhen General Project No. JCYJ20220530143604010 and China National Postdoctoral Program for Innovative Talents No. BX2023004." } ]
Figure 1. Given a few images with various viewpoints and articulated poses, our approach can reconstruct an animatable human avatar (panels: canonical avatar reconstruction; free-pose animation).
HAVE-FUN: Human Avatar Reconstruction from Few-Shot Unconstrained Images
[ { "figure_caption": "Figure 2 .2Figure 2. Overview of HaveFun framework. Based on the DMTet, we design a driveable tetrahedral representation with the skinning mechanism. In terms of optimization, we employ loss functions based on reference-data reconstruction and SDS guidance to create human avatars from few-shot unconstrained images.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Training data for ablation studies in Figs. 4 and 5. Blue and green boxes indicate 2-and 4-shot training data, respectively.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Table 1 .1The effects of SDS and Laplacian normal losses.", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Ablation studies on the few-shot body reconstruction task. Zoom in to see details.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Ablation studies on the few-shot hand reconstruction task. Zoom in to see details.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Comparison of hand reconstruction on FS-DART. See suppl. material for training data.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "TeCH ( 1 Figure 7 .17Figure 7. Comparison of body reconstruction on FS-XHumans, where TeCH is a 1-shot method and SelfRecon is illustrated with 8-shot or video (100-shot) data training. See suppl. material for training data.", "figure_data": "", "figure_id": "fig_6", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 8 .Figure 9 .89Figure 8. Articulated animation of human avatars.Real-world imagesCanonical avatar Avatar animation", "figure_data": "", "figure_id": "fig_7", "figure_label": "89", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. Evaluation data of FS-DART from 24 viewpoints.", "figure_data": "", "figure_id": "fig_8", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 .12Figure 12. Training data for Fig. 7 of the main text. The first N samples in rows are used for the N -shot task. The textural captions are only employed by TeCH.", "figure_data": "", "figure_id": "fig_9", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 .13Figure 13. Evaluation data of FS-XHumans from 24 viewpoints.", "figure_data": "", "figure_id": "fig_10", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "official implementation from https://github.com/ huangyangyi/TeCH. TeCH requires 5 stages to optimize a human avatar, including VQA caption, Dream-Booth fine-tuning, geometry optimization, geometry postprocessing, and texture optimization. The captions used for text-guided SDS loss are shown in Fig 12.", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 14 .14Figure 14. The effects of normal and depth losses.", "figure_data": "", "figure_id": "fig_12", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 .Figure 16 .1516Figure 15. 
Comparison of body reconstruction with few-shot data", "figure_data": "", "figure_id": "fig_13", "figure_label": "1516", "figure_type": "figure" }, { "figure_caption": "1717", "figure_data": "", "figure_id": "fig_14", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "(c), some black patterns appear on the top of fingers, which is caused by shadows in the training data. That is, due to a lack of awareness of lighting, the SDS guidance tends to generate shadow-like patterns in unseen regions. To address this issue, we plan to introduce illumination-aware designs (e.g.,[7]) to the HaveFun framework.", "figure_data": "", "figure_id": "fig_15", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 17 .17Figure 17. Demonstration of limitations.", "figure_data": "", "figure_id": "fig_16", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "(b)). Additionally, artifacts such as the top of thumb could come from an inaccurate MANO/SMPLX fitting (Fig.17", "figure_data": "", "figure_id": "fig_17", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(d)).", "figure_data": "", "figure_id": "fig_18", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Comparison of few-shot human reconstruction. For FS-Xhumans, TeCH", "figure_data": "MethodPSNR ↑SSIM ×10 2 ↑ LPIPS ×10 2 ↓FS-XHumansSelfRecon19.9/20.792.7/94.36.5/6.3TeCH21.092.46.5HaveFun (ours) 24.0/25.6/26.8 95.5/96.3/96.7 4.2/3.5/3.0FS-DARTSelfRecon20.8/21.3/21.7 92.0/92.4/92.8 9.6/9.0/8.7HaveFun (ours) 26.6/26.7/26.3 96.7/96.8/96.7 5.8/5.5/5.2", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The effects of normal and depth losses.", "figure_data": ".640.96270.0347✓25.080.96010.0352✓24.300.95810.040423.850.95750.0466", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
Xihe Yang; Xingyu Chen; Daiheng Gao; Shaohui Wang; Xiaoguang Han; Baoyuan Wang
[ { "authors": "Rameen Abdal; Yipeng Qin; Peter Wonka", "journal": "", "ref_id": "b0", "title": "Image2StyleGAN: How to embed images into the stylegan latent space?", "year": "2019" }, { "authors": "Yukang Cao; Yan-Pei Cao; Kai Han; Ying Shan; Kwan-Yee K Wong", "journal": "", "ref_id": "b1", "title": "DreamAvatar: Text-and-shape guided 3d human avatar generation via diffusion models", "year": "2023" }, { "authors": "Connor Z Eric R Chan; Matthew A Lin; Koki Chan; Boxiao Nagano; Shalini De Pan; Orazio Mello; Leonidas Gallo; Jonathan Guibas; Sameh Tremblay; Khamis", "journal": "", "ref_id": "b2", "title": "Efficient geometry-aware 3D generative adversarial networks", "year": "2022" }, { "authors": "Yufeng Xu Chen; Zheng; J Michael; Otmar Black; Andreas Hilliges; Geiger", "journal": "", "ref_id": "b3", "title": "SNARF: Differentiable forward skinning for animating non-rigid neural implicit shapes", "year": "2021" }, { "authors": "Xingyu Chen; Yufeng Liu; Yajiao Dong; Xiong Zhang; Chongyang Ma; Yanmin Xiong; Yuan Zhang; Xiaoyan Guo", "journal": "", "ref_id": "b4", "title": "Mobrecon: Mobile-friendly hand mesh reconstruction from monocular image", "year": "2022" }, { "authors": "Xingyu Chen; Yu Deng; Baoyuan Wang", "journal": "", "ref_id": "b5", "title": "Mimic3d: Thriving 3d-aware gans via 3d-to-2d imitation", "year": "2023" }, { "authors": "Xingyu Chen; Baoyuan Wang; Heung-Yeung Shum", "journal": "", "ref_id": "b6", "title": "Hand avatar: Free-pose hand animation and rendering from monocular video", "year": "2023" }, { "authors": "Israel Cohen; Yiteng Huang; Jingdong Chen; Jacob Benesty; Jacob Benesty; Jingdong Chen; Yiteng Huang; Israel Cohen", "journal": "", "ref_id": "b7", "title": "Pearson correlation coefficient. Noise reduction in speech processing", "year": "2009" }, { "authors": "Enric Corona; Tomas Hodan; Minh Vo; Francesc Moreno-Noguer; Chris Sweeney; Richard Newcombe; Lingni Ma", "journal": "", "ref_id": "b8", "title": "LISA: Learning implicit shape and appearance of hands", "year": "2022" }, { "authors": "Enric Corona; Mihai Zanfir; Thiemo Alldieck; Eduard Gabriel Bazavan; Andrei Zanfir; Cristian Sminchisescu", "journal": "", "ref_id": "b9", "title": "Structured 3d features for reconstructing controllable avatars", "year": "2023" }, { "authors": "Kangle Deng; Andrew Liu; Jun-Yan Zhu; Deva Ramanan", "journal": "", "ref_id": "b10", "title": "Depth-supervised NeRF: Fewer views and faster training for free", "year": "2022" }, { "authors": "Yu Deng; Jiaolong Yang; Jianfeng Xiang; Xin Tong", "journal": "", "ref_id": "b11", "title": "GRAM: Generative radiance manifolds for 3D-aware image generation", "year": "2022" }, { "authors": "Yu Deng; Baoyuan Wang; Heung-Yeung Shum", "journal": "", "ref_id": "b12", "title": "Learning detailed radiance manifolds for high-fidelity and 3Dconsistent portrait synthesis from monocular image", "year": "2023" }, { "authors": "Zijian Dong; Xu Chen; Jinlong Yang; Michael J Black; Otmar Hilliges; Andreas Geiger", "journal": "", "ref_id": "b13", "title": "AG3D: Learning to Generate 3D Avatars from 2D Image Collections", "year": "2023" }, { "authors": "Ainaz Eftekhar; Alexander Sax; Jitendra Malik; Amir Zamir", "journal": "", "ref_id": "b14", "title": "Omnidata: A scalable pipeline for making multi-task mid-level vision datasets from 3d scans", "year": "2021" }, { "authors": "Daiheng Gao; Yuliang Xiu; Kailin Li; Lixin Yang; Feng Wang; Peng Zhang; Bang Zhang; Cewu Lu; Ping Tan", "journal": "NeurIPS", "ref_id": "b15", "title": "DART: Articulated hand model with 
diverse accessories and rich textures", "year": "2022" }, { "authors": "Jun Gao; Tianchang Shen; Zian Wang; Wenzheng Chen; Kangxue Yin; Daiqing Li; Or Litany; Zan Gojcic; Sanja Fidler", "journal": "NeurIPS", "ref_id": "b16", "title": "Get3D: A generative model of high quality 3D textured shapes learned from images", "year": "2022" }, { "authors": "Philip-William Grassal; Malte Prinzler; Titus Leistner; Carsten Rother; Matthias Nießner; Justus Thies", "journal": "", "ref_id": "b17", "title": "Neural head avatars from monocular RGB videos", "year": "2022" }, { "authors": "Chen Guo; Tianjian Jiang; Xu Chen; Jie Song; Otmar Hilliges", "journal": "", "ref_id": "b18", "title": "Vid2Avatar: 3D avatar reconstruction from videos in the wild via self-supervised scene decomposition", "year": "2023" }, { "authors": "Zhiyang Guo; Wengang Zhou; Min Wang; Li Li; Houqiang Li", "journal": "", "ref_id": "b19", "title": "HandNeRF: Neural radiance fields for animatable interacting hands", "year": "2023" }, { "authors": "Hsuan-I Ho; Lixin Xue; Jie Song; Otmar Hilliges", "journal": "", "ref_id": "b20", "title": "Learning locally editable virtual humans", "year": "2023" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "NeurIPS", "ref_id": "b21", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Fangzhou Hong; Mingyuan Zhang; Liang Pan; Zhongang Cai; Lei Yang; Ziwei Liu", "journal": "ACM TOG", "ref_id": "b22", "title": "Avatarclip: Zero-shot textdriven generation and animation of 3D avatars", "year": "2022" }, { "authors": "Yukun Huang; Jianan Wang; Ailing Zeng; He Cao; Xianbiao Qi; Yukai Shi; Zheng-Jun Zha; Lei Zhang", "journal": "", "ref_id": "b23", "title": "Dreamwaltz: Make a scene with complex 3D animatable avatars", "year": "2023" }, { "authors": "Yangyi Huang; Hongwei Yi; Weiyang Liu; Haofan Wang; Boxi Wu; Wenxiao Wang; Binbin Lin; Debing Zhang; Deng Cai", "journal": "", "ref_id": "b24", "title": "One-shot implicit animatable avatars with modelbased priors", "year": "2023" }, { "authors": "Yangyi Huang; Hongwei Yi; Yuliang Xiu; Tingting Liao; Jiaxiang Tang; Deng Cai; Justus Thies", "journal": "", "ref_id": "b25", "title": "TeCH: Text-guided reconstruction of lifelike clothed humans", "year": "2024" }, { "authors": "Mustafa Is ¸ık; Martin Rünz; Markos Georgopoulos; Taras Khakhulin; Jonathan Starck; Lourdes Agapito; Matthias Nießner", "journal": "ACM TOG", "ref_id": "b26", "title": "HumanRF: High-fidelity neural radiance fields for humans in motion", "year": "2023" }, { "authors": "Shun Iwase; Shunsuke Saito; Tomas Simon; Stephen Lombardi; Timur Bagautdinov; Rohan Joshi; Fabian Prada; Takaaki Shiratori; Yaser Sheikh; Jason Saragih", "journal": "", "ref_id": "b27", "title": "Re-lightableHands: Efficient neural relighting of articulated hand models", "year": "2023" }, { "authors": "Ajay Jain; Matthew Tancik; Pieter Abbeel", "journal": "", "ref_id": "b28", "title": "Putting nerf on a diet: Semantically consistent few-shot view synthesis", "year": "2021" }, { "authors": "Boyi Jiang; Yang Hong; Hujun Bao; Juyong Zhang", "journal": "", "ref_id": "b29", "title": "Sel-fRecon: Self reconstruction your digital avatar from monocular video", "year": "2022" }, { "authors": "Ruixiang Jiang; Can Wang; Jingbo Zhang; Menglei Chai; Mingming He; Dongdong Chen; Jing Liao", "journal": "", "ref_id": "b30", "title": "AvatarCraft: Transforming text into neural human avatars with parameterized shape and pose control", "year": "2023" }, { "authors": "Suyi Jiang; Haoran Jiang; Ziyu 
Wang; Haimin Luo; Wenzheng Chen; Lan Xu", "journal": "", "ref_id": "b31", "title": "HumanGen: Generating human radiance fields with explicit priors", "year": "2023" }, { "authors": "Wei Jiang; Kwang Moo Yi; Golnoosh Samei; Oncel Tuzel; Anurag Ranjan", "journal": "", "ref_id": "b32", "title": "Neuman: Neural human radiance field from a single video", "year": "2022" }, { "authors": "Korrawe Karunratanakul; Sergey Prokudin; Otmar Hilliges; Siyu Tang", "journal": "", "ref_id": "b33", "title": "Harp: Personalized hand reconstruction from a monocular rgb video", "year": "2023" }, { "authors": "Mijeong Kim; Seonguk Seo; Bohyung Han", "journal": "", "ref_id": "b34", "title": "InfoNeRF: Ray entropy minimization for few-shot neural volume rendering", "year": "2022" }, { "authors": "Nikos Kolotouros; Thiemo Alldieck; Andrei Zanfir; Eduard Gabriel Bazavan; Mihai Fieraru", "journal": "", "ref_id": "b35", "title": "DreamHuman: Animatable 3d avatars from text", "year": "2023" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "", "ref_id": "b36", "title": "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Ruilong Li; Julian Tanke; Minh Vo; Michael Zollhöfer; Jürgen Gall; Angjoo Kanazawa; Christoph Lassner", "journal": "", "ref_id": "b37", "title": "TAVA: Template-free animatable volumetric actors", "year": "2022" }, { "authors": "Zhe Li; Zerong Zheng; Yuxiao Liu; Boyao Zhou; Yebin Liu", "journal": "", "ref_id": "b38", "title": "Posevocab: Learning joint-structured pose embeddings for human avatar modeling", "year": "2023" }, { "authors": "Tingting Liao; Hongwei Yi; Yuliang Xiu; Jiaxiang Tang; Yangyi Huang; Justus Thies; Michael J Black", "journal": "", "ref_id": "b39", "title": "Tada! 
text to animatable digital avatars", "year": "2023" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b40", "title": "Magic3D: High-resolution text-to-3D content creation", "year": "2023" }, { "authors": "Yukang Lin; Haonan Han; Chaoqun Gong; Zunnan Xu; Yachao Zhang; Xiu Li", "journal": "", "ref_id": "b41", "title": "Consistent123: One image to highly consistent 3d asset using case-aware diffusion priors", "year": "2023" }, { "authors": "Minghua Liu; Chao Xu; Haian Jin; Linghao Chen; Zexiang Xu; Hao Su", "journal": "", "ref_id": "b42", "title": "One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization", "year": "2023" }, { "authors": "Ruoshi Liu; Rundi Wu; Basile Van Hoorick; Pavel Tokmakov; Sergey Zakharov; Carl Vondrick", "journal": "", "ref_id": "b43", "title": "Zero-1-to-3: Zero-shot one image to 3D object", "year": "2023" }, { "authors": "Yuan Liu; Cheng Lin; Zijiao Zeng; Xiaoxiao Long; Lingjie Liu; Taku Komura; Wenping Wang", "journal": "", "ref_id": "b44", "title": "SyncDreamer: Generating multiview-consistent images from a single-view image", "year": "2023" }, { "authors": "Xiaoxiao Long; Cheng Lin; Peng Wang; Taku Komura; Wenping Wang", "journal": "", "ref_id": "b45", "title": "Sparseneus: Fast generalizable neural surface reconstruction from sparse views", "year": "2022" }, { "authors": "Matthew Loper; Naureen Mahmood; Javier Romero; Gerard Pons-Moll; Michael J Black", "journal": "ACM TOG", "ref_id": "b46", "title": "SMPL: A skinned multiperson linear model", "year": "2015" }, { "authors": "Luke Melas-Kyriazi; Christian Rupprecht; Iro Laina; Andrea Vedaldi", "journal": "", "ref_id": "b47", "title": "RealFusion: 360 reconstruction of any object from a single image", "year": "2023" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "", "ref_id": "b48", "title": "NeRF: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Michael Niemeyer; Jonathan T Barron; Ben Mildenhall; S M Mehdi; Andreas Sajjadi; Noha Geiger; Radwan", "journal": "", "ref_id": "b49", "title": "Reg-NeRF: Regularizing neural radiance fields for view synthesis from sparse inputs", "year": "2022" }, { "authors": "Georgios Pavlakos; Vasileios Choutas; Nima Ghorbani; Timo Bolkart; Dimitrios Ahmed Aa Osman; Michael J Tzionas; Black", "journal": "", "ref_id": "b50", "title": "Expressive body capture: 3D hands, face, and body from a single image", "year": "2019" }, { "authors": "Sida Peng; Junting Dong; Qianqian Wang; Shangzhan Zhang; Qing Shuai; Xiaowei Zhou; Hujun Bao", "journal": "", "ref_id": "b51", "title": "Animatable neural radiance fields for modeling dynamic human bodies", "year": "2021" }, { "authors": "Sida Peng; Yuanqing Zhang; Yinghao Xu; Qianqian Wang; Qing Shuai; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b52", "title": "Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans", "year": "2021" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "ICLR", "ref_id": "b53", "title": "DreamFusion: Text-to-3D using 2D diffusion", "year": "2023" }, { "authors": "Albert Pumarola; Enric Corona; Gerard Pons-Moll; Francesc Moreno-Noguer", "journal": "", "ref_id": "b54", "title": "D-NeRF: Neural radiance fields for dynamic scenes", "year": 
"2021" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b55", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Daniel Roich; Ron Mokady; H Amit; Daniel Bermano; Cohen-Or", "journal": "ACM TOG", "ref_id": "b56", "title": "Pivotal tuning for latent-based editing of real images", "year": "2022" }, { "authors": "Javier Romero; Dimitrios Tzionas; Michael J Black", "journal": "ACM TOG", "ref_id": "b57", "title": "Embodied hands: Modeling and capturing hands and bodies together", "year": "2017" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b58", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Shunsuke Saito; Zeng Huang; Ryota Natsume; Shigeo Morishima; Angjoo Kanazawa; Hao Li", "journal": "", "ref_id": "b59", "title": "PIFu: Pixel-aligned implicit function for high-resolution clothed human digitization", "year": "2019" }, { "authors": "Shunsuke Saito; Tomas Simon; Jason Saragih; Hanbyul Joo", "journal": "", "ref_id": "b60", "title": "Pifuhd: Multi-level pixel-aligned implicit function for high-resolution 3d human digitization", "year": "2020" }, { "authors": "Ruizhi Shao; Hongwen Zhang; He Zhang; Mingjia Chen; Yan-Pei Cao; Tao Yu; Yebin Liu", "journal": "", "ref_id": "b61", "title": "DoubleField: Bridging the neural surface and radiance fields for high-fidelity human reconstruction and rendering", "year": "2022" }, { "authors": "Kaiyue Shen; Chen Guo; Manuel Kaufmann; Juan Jose Zarate; Julien Valentin; Jie Song; Otmar Hilliges", "journal": "", "ref_id": "b62", "title": "X-Avatar: Expressive human avatars", "year": "2023" }, { "authors": "Tianchang Shen; Jun Gao; Kangxue Yin; Ming-Yu Liu; Sanja Fidler", "journal": "NeurIPS", "ref_id": "b63", "title": "Deep marching tetrahedra: a hybrid representation for high-resolution 3D shape synthesis", "year": "2021" }, { "authors": "Ruoxi Shi; Hansheng Chen; Zhuoyang Zhang; Minghua Liu; Chao Xu; Xinyue Wei; Linghao Chen; Chong Zeng; Hao Su", "journal": "", "ref_id": "b64", "title": "Zero123++: A single image to consistent multi-view diffusion base model", "year": "2023" }, { "authors": "Jingxiang Sun; Xuan Wang; Lizhen Wang; Xiaoyu Li; Yong Zhang; Hongwen Zhang; Yebin Liu", "journal": "", "ref_id": "b65", "title": "Next3D: Generative neural texture rasterization for 3D-aware head avatars", "year": "2023" }, { "authors": "Junshu Tang; Tengfei Wang; Bo Zhang; Ting Zhang; Ran Yi; Lizhuang Ma; Dong Chen", "journal": "", "ref_id": "b66", "title": "Make-it-3D: High-fidelity 3D creation from a single image with diffusion prior", "year": "2023" }, { "authors": "Xiao Tang; Tianyu Wang; Chi-Wing Fu", "journal": "", "ref_id": "b67", "title": "Towards accurate alignment in real-time 3D hand-mesh reconstruction", "year": "2021" }, { "authors": "Gusi Te; Xiu Li; Xiao Li; Jinglu Wang; Wei Hu; Yan Lu", "journal": "", "ref_id": "b68", "title": "Neural capture of animatable 3D human from monocular video", "year": "2022" }, { "authors": "Duomin Wang; Yu Deng; Zixin Yin; Heung-Yeung Shum; Baoyuan Wang", "journal": "", "ref_id": "b69", "title": "Progressive disentangled representation learning for fine-grained controllable talking head synthesis", "year": "2023" }, { "authors": "Haochen Wang; Xiaodan Du; 
Jiahao Li; Raymond A Yeh; Greg Shakhnarovich", "journal": "", "ref_id": "b70", "title": "Score jacobian chaining: Lifting pretrained 2D diffusion models for 3D generation", "year": "2023" }, { "authors": "Shaofei Wang; Katja Schwarz; Andreas Geiger; Siyu Tang", "journal": "", "ref_id": "b71", "title": "Arah: Animatable volume rendering of articulated human SDFs", "year": "2022" }, { "authors": "Tengfei Wang; Yong Zhang; Yanbo Fan; Jue Wang; Qifeng Chen", "journal": "", "ref_id": "b72", "title": "High-fidelity GAN inversion for image attribute editing", "year": "2022" }, { "authors": "Zhou Wang; Alan C Bovik; Hamid R Sheikh; Eero P Simoncelli", "journal": "IEEE TIP", "ref_id": "b73", "title": "Image quality assessment: from error visibility to structural similarity", "year": "2004" }, { "authors": "Chung-Yi Weng; Brian Curless; P Pratul; Jonathan T Srinivasan; Ira Barron; Kemelmacher-Shlizerman", "journal": "", "ref_id": "b74", "title": "Hu-manNeRF: Free-viewpoint rendering of moving people from monocular video", "year": "2022" }, { "authors": "Chung-Yi Weng; P Pratul; Brian Srinivasan; Ira Curless; Kemelmacher-Shlizerman", "journal": "", "ref_id": "b75", "title": "PersonNeRF: Personalized reconstruction from photo collections", "year": "2023" }, { "authors": "Zhangyang Xiong; Di Kang; Derong Jin; Weikai Chen; Linchao Bao; Shuguang Cui; Xiaoguang Han", "journal": "", "ref_id": "b76", "title": "Get3DHuman: Lifting StyleGAN-Human into a 3D generative model using pixel-aligned reconstruction priors", "year": "2023" }, { "authors": "Yuliang Xiu; Jinlong Yang; Dimitrios Tzionas; Michael J Black", "journal": "", "ref_id": "b77", "title": "ICON: Implicit clothed humans obtained from normals", "year": "2022" }, { "authors": "Yuliang Xiu; Jinlong Yang; Xu Cao; Dimitrios Tzionas; Michael J Black", "journal": "", "ref_id": "b78", "title": "ECON: Explicit clothed humans optimized via normal integration", "year": "2023" }, { "authors": "Tianhan Xu; Yasuhiro Fujita; Eiichi Matsumoto", "journal": "", "ref_id": "b79", "title": "Surface-aligned neural radiance fields for controllable 3D human synthesis", "year": "2022" }, { "authors": "Wenqi Yang; Guanying Chen; Chaofeng Chen; Zhenfang Chen; K Kwan-Yee; Wong", "journal": "", "ref_id": "b80", "title": "PS-NeRF: Neural inverse rendering for multi-view photometric stereo", "year": "2022" }, { "authors": "Zhengming Yu; Wei Cheng; Xian Liu; Wayne Wu; Kwan-Yee Lin", "journal": "CVPR", "ref_id": "b81", "title": "MonoHuman: Animatable human neural field from monocular video", "year": "2023" }, { "authors": "Huichao Zhang; Bowen Chen; Hao Yang; Liao Qu; Xu Wang; Li Chen; Chao Long; Feida Zhu; Kang Du; Min Zheng", "journal": "", "ref_id": "b82", "title": "Avatarverse: High-quality & stable 3D avatar creation from text and pose", "year": "2023" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b83", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Fuqiang Zhao; Wei Yang; Jiakai Zhang; Pei Lin; Yingliang Zhang; Jingyi Yu; Lan Xu", "journal": "", "ref_id": "b84", "title": "HumanNeRF: Efficiently generated human radiance field from sparse inputs", "year": "2022" }, { "authors": "Yufeng Zheng; Victoria Fernández Abrevaya; Marcel C Bühler; Xu Chen; Michael J Black; Otmar Hilliges", "journal": "", "ref_id": "b85", "title": "I M Avatar: Implicit morphable head avatars from videos", "year": "2022" }, { "authors": "Zerong Zheng; Han Huang; Tao 
Yu; Hongwen Zhang; Yandong Guo; Yebin Liu", "journal": "", "ref_id": "b86", "title": "Structured local radiance fields for human avatar modeling", "year": "2022" }, { "authors": "Zerong Zheng; Xiaochen Zhao; Hongwen Zhang; Boning Liu; Yebin Liu", "journal": "ACM TOG", "ref_id": "b87", "title": "Avatarrex: Real-time expressive fullbody avatars", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 55.09, 588.28, 231.27, 11.84 ], "formula_id": "formula_0", "formula_text": "G : (R, θ, ψ) ∈ R d R × R d θ × R d ψ → Î ∈ R H×W ×3 ,(1)" }, { "formula_coordinates": [ 3, 342.87, 206.52, 202.24, 24.8 ], "formula_id": "formula_1", "formula_text": "v m = (v t a + δv t a ) • s t b -(v t b + δv t b ) • s a s b -s a ,(2)" }, { "formula_coordinates": [ 3, 346.14, 375.54, 198.97, 17.25 ], "formula_id": "formula_2", "formula_text": "z t = √ ᾱt I + (1 -ᾱt )ϵ, ϵ ∼ N (0, 1),(3)" }, { "formula_coordinates": [ 3, 345.52, 444.61, 195.72, 14.11 ], "formula_id": "formula_3", "formula_text": "min E t,ϵ ∥ϵ -ε (z t , t, CLIP(I), δR))∥ 2 2 . (4" }, { "formula_coordinates": [ 3, 541.24, 447.9, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 3, 379.13, 500.03, 165.99, 9.68 ], "formula_id": "formula_5", "formula_text": "I δR = Zero123(I, δR).(5)" }, { "formula_coordinates": [ 3, 335.19, 647.98, 209.92, 22.31 ], "formula_id": "formula_6", "formula_text": "∇ η L SDS ≜ E t,ϵ w(t) (ε (z t , t, y) -ϵ) ∂x ∂η ,(6)" }, { "formula_coordinates": [ 4, 54.39, 74.08, 483.8, 220.58 ], "formula_id": "formula_7", "formula_text": "! ! \" # ! \" $ ! % \"#$%& Skinning w/ # ' , $ ' Skinning w/ # ! , $ ! Skinning w/ # ' , $ ' Few-Shot Reference Skinning w/ # ! , $ ! Few-Shot Guidance % ()( % ()( Zero123 Zero123 + + + + ! ' \" # ' \" $ ' Aligned-view rendering Backpropagation Random-view rendering % \"#$%&" }, { "formula_coordinates": [ 4, 102.06, 580.32, 184.3, 33.29 ], "formula_id": "formula_8", "formula_text": "W v m = uW p v p 1 + vW p v p 2 + γW p v p 3 E v m = uE p v p 1 + vE p v p 2 + γE p v p 3 ,(7)" }, { "formula_coordinates": [ 4, 55.71, 685.79, 226.78, 30.55 ], "formula_id": "formula_9", "formula_text": "ṽm = B b=1 W v m ,b G b (θ, J)G(0, J) -1 (v m +E v m ψ), (8" }, { "formula_coordinates": [ 4, 282.49, 696.52, 3.87, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 4, 340.74, 554.96, 172.5, 12.2 ], "formula_id": "formula_11", "formula_text": "C(P s ) : P s ∈ R H×W ×3 → Î ∈ R H×W ×3 ." }, { "formula_coordinates": [ 4, 375.43, 684.81, 165.53, 31.53 ], "formula_id": "formula_12", "formula_text": "Îalign i = G(R align i , θ i , ψ i ) Înovel i = G(R novel , θ i , ψ i ). (10" }, { "formula_coordinates": [ 4, 540.96, 696.36, 4.15, 8.64 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 5, 77.77, 146.59, 204.44, 81.22 ], "formula_id": "formula_14", "formula_text": "L texture = LPIPS( Îalign i , I i ) + || Îalign i -I i || 2 2 L normal = 1-< Nalign i , N i > L depth = cov( Dalign i ,Di) σ Dalign i σ D i L mask = || Malign i -M i || 2 2 , (11" }, { "formula_coordinates": [ 5, 282.21, 219.16, 4.15, 8.64 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 5, 123.18, 423.85, 159.04, 9.79 ], "formula_id": "formula_16", "formula_text": "L lap = ||L M • nM || 2 , (12" }, { "formula_coordinates": [ 5, 282.21, 424.31, 4.15, 8.64 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 5, 50.11, 654.46, 244.1, 23.84 ], "formula_id": "formula_18", "formula_text": "∇ η G L sds = E t,ϵ w(t) (ε (z i,t , t, CLIP(I i ), δR) -ϵ) ∂x ∂η G ." }, { "formula_coordinates": [ 14, 72.18, 74.71, 192.12, 32.87 ], "formula_id": "formula_19", "formula_text": "L normal L depth PSNR ↑ SSIM ↑ LPIPS ↓ 4-shot FS-XHumans ✓ ✓25" } ]
10.1016/j.dajour.2023.100230
2024-02-01
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b34", "b1", "b7", "b2", "b21", "b24", "b16", "b25", "b25", "b23" ], "table_ref": [], "text": "Today's deep learning model architectures are more powerful than ever and enable the use of artificial intelligence (AI) in a wide range of application areas. However, with increasing model complexity comes increasing opacity and their output is less (human-)interpretable. Therefore, it is not uncommon for large models to be regarded only as black boxes. This can be particularly problematic in safety-relevant applications as, for instance, in autonomous driving (AD) where AI models cause decisions of autonomous systems that should be trustworthy, reasonable, and explainable [35]. Thus, the field of Explainable Artificial Intelligence (XAI) [2] is of increasing interest.\nGenerally, XAI approaches for analysis of deep learning models can be categorized in model-specific and model-agnostic methods. While model-specific methods are tailored to the underlying architecture and manipulate the test model in inference and/or training, model-agnostic methods are applied in a post-hoc manner to the test model, i.e., to fully trained models. These methods have the advantage of high flexibility, since models are treated as black boxes and, thus, any model can be analyzed the same way. Hence, the interpretation or explanation results can be compared across model classes or architectures. However, a major drawback of model-agnostic XAI methods is that only the model input can be manipulated to analyze consequential output changes. Therefore, these methods are sampling-based, which leads to a high computational effort for complex models.\nIn AD, the trustworthy recognition of street scenes, especially pedestrians [8,3], is of major interest. Contemporary object detection (OD) models show good performances, but have very distinct basic architectures and working principles. [22] For pedestrian detection, a severe challenge is that commonly, pedestrians appear under occlusion so that OD models have high robustness requirements here. [25] From an XAI point of view, it is therefore of particular importance on which semantic regions a test model bases its decisions for detecting a pedestrian, regardless of the underlying test model. Hence, model-agnostic explanation models should be considered here since it enables high flexibility and comparability. Many semantic knowledge concepts that appear on pedestrians like clothing, accessories, or poses mostly coincide with specific body parts and the division into body parts is coherent among different perspectives. Only the visibility of individual body parts differs between individual pedestrian instances. Therefore, we can use body parts as semantic regions in order to profile and benchmark object detection models with each other.\nModel-agnostic explanation methods can be further distinguished into global and local explanation methods. Global methods try to explain the model on the data as a whole to interpret the overall performance, whereas local methods try to explain outputs for single data points or instances. Testing with Concept Activation Vector (TCAV) is a method that tests a model for relevant features that are given by example images [17]. A Concept Activation Vector (CAV) then quantifies the extent to which the model was activated in a prediction to a given concept. Those concepts can be, for instance, textures, color schemes, or anything that is describable by a bunch of images. 
Since we want to assess body parts, we do not have clear color or textures we want to focus on, and example images of isolated body parts are not available. This is why we do not focus on the TCAV in this work. Rather, we will focus on a formalism called Local Interpretable Model-agnostic Explanations (LIME) [26]. LIME is a method for local explanation of instances by introducing a surrogate model that is simpler and more interpretable than the typically complex reference model [26]. Further prerequisites established a formalism called Shapley Additive Explanations (SHAP) presented by Lundberg and Lee [24]. The approach in this work is based on KernelSHAP, a specification of SHAP that enables local model-agnostic explanations, so that we go into further details in Section 3.\nHowever, when it comes to OD problems, model-agnostic explanation methods show some shortcomings being based on input sampling. In comparison to many machine learning tasks dealing with image processing, the input dimension and typically the model size is rather low, which makes single forward passes through the model quite fast. In image processing or, particularly, OD tasks, the input, i.e., image data, is rather complex and forward passes are computationally heavier. Thus, sampling images causes lots of forward passes decelerating the model explanation substantially. Due to the drastically larger number of input dimension, even more samples are needed to gain meaningful model explanations.\nTherefore, we need to adapt sampling-based, model-agnostic explanation methods like KernelSHAP to explain the output pedestrian detection models." }, { "figure_ref": [], "heading": "Related Works in XAI", "publication_ref": [ "b36", "b37", "b29", "b27", "b35", "b11", "b17", "b26", "b13", "b38", "b6", "b25", "b3", "b31", "b8", "b12", "b18", "b33", "b32", "b32", "b9", "b15", "b22", "b28", "b14" ], "table_ref": [], "text": "In recent years, the domain of XAI has gathered a significant momentum, particularly in the field of image processing. This surge is driven by the critical need to enhance transparency, accountability, and trust in AI systems, especially those deployed in sensitive domains like healthcare, autonomous vehicles, and security. Here, we review some prominent works in the realm of XAI in image processing. The existing works in the literature can be broadly categorized into five main fields based on the type of models that has been used for the purpose: interpretable Convolutional Neural Network (CNN)s, attention mechanism based models, decision trees and rule-based models, Generative Adversarial Network (GAN)s for explainability, Case-Based Reasoning (CBR) and prototypical networks.\nInterpretable CNNs -Zeiler and Fergus introduced the concept of \"deconvolution networks\" in [37], enabling visualization of feature activations to elucidate CNN decisions. Zhang et al. in [38] proposed the concept of an interpretable CNN for better understanding of the representations of the higher convolution layers in a CNN. A special loss for each of the filters in the higher convolution layers were used so that each of these filters of an interpretable CNN corresponds to a distinct part of the object. It mitigated the need for manual object part annotations, which are often unavailable in real datasets. Selvaraju et al. proposed Gradient-weighted Class Activation Mapping (Grad-CAM) [30], facilitating the localization of discriminative regions in images influencing CNN predictions. 
By displaying the input regions that are \"important\" for predictions by heatmaps, it increased the transparency of CNN-based models by providing visual explanations. But the application of the previously mentioned methods were limited to visual explanations and could not capture more complex decision making factors which were not specific to a particular region of an object (e.g., properties of the scene like weather of the outdoor scene) [28].\nAttention mechanism -Xu et al. pioneered the application of attention mechanisms in image captioning in [36], allowing networks to focus on salient image regions during prediction. Fukui et al. extended this idea with Attention Branch Network (ABN) [12], augmenting CNNs with attention modules to enhance interpretability. In the previously mentioned works, the learned weights were used to display the attended regions of an image or text which was used to verify the mechanism they were designed to employ. Kim et al. show in [18] that Hadamard product in multimodal deep networks implicitly carries on an attention mechanism for visual inputs. They demonstrate how the Hadamard product in multimodal deep networks takes into account both visual and textual inputs simultaneously by using a gradient-based visualization technique and has a superior performance as compared to the respective learned attention weights [27]. But using attention mechanism for images can be often computationally expensive, thus having a limitation on its scalability [14].\nDecision Trees and rule-based models -Zhang et al. introduced decision tree guided CNNs in [39], integrating decision trees with CNNs to provide explicit reasoning for classification scores. They suggested learning a decision tree which provided a semantic explanation for each prediction given by the CNN. The feature representations in higher convolution layers are broken down into fundamental concepts of the object parts by the decision tree. In this manner, it indicates which part of the object activate which prediction filters, as well as the relative contribution of each object part to the prediction score. The decision tree explains CNN predictions at various fine-grained levels by arranging all possible choice made in a coarse-to-fine order. These semantic justifications for CNN predictions have enhanced importance that is not just limited to the conventional pixel-level analysis of CNNs. But such an explanation method is dependent on the model. Though some model-specific explanation techniques may be more helpful in certain situations for a given model than model-agnostic techniques, but the latter is more scalable. It has the benefit of being totally independent to the model, retaining the ability to apply these techniques in whole different use cases where the predictive model is different [7]. Also, decision trees might lack the expressive power required to capture complex patterns in high dimensional data like images. Linear regression models are a type of rule-based models that defines a relationship between the inputs and the outputs of the model by fitting a linear equation to the observed data. The LIME [26] method, that is mentioned already in Section 1, uses surrogate linear models for the explainability of the black-box model.\nGANs for explainability -The majority of interpretable GAN applications [4,32,9] at this time deal with creating and altering images. 
These applications are limited by the kinds of datasets and resources that can be used to train GAN, as well as the application scenarios and techniques that are needed for specific tasks. Interpretable techniques therefore have a limited degree of generalization. Certain high-risk domains, like software and intrusion detection, malicious speech, disease diagnosis, and mortality predictions, involve less GAN application. There is also a lack of a cohesive and consistent interpretable framework [13,19] in the research of GAN interpretability, and it is heavily dependent on particular problems, task scenarios, and models, leading to a low level of universality for interpretable approaches. Increasing model transparency is necessary to investigate the interpretability of GAN models. Privacy protection is at risk if the data is transparent. Therefore, a major issue for the current GAN interpretable approaches is to improve the interpretation effect in certain high risk applications like healthcare, security, autonomous driving while maintaining the security of GAN models and data privacy. [34] CBR and prototypical networks -Prototypical Networks have several important advantages. What distinguishes them is their capacity to generalize well from small amounts of labelled input, which makes them especially useful in few-shot learning contexts [33]. In situations where data scarcity presents a difficulty, Prototypical Networks perform better than standard models, which call for large amount of labeled data. Furthermore, prototype-based method supports strong classification, improving generalization across different domains. These networks' ability to quickly adjust to new data is another benefit of the iterative learning mechanism, which strengthens their standing as flexible and adaptive machine learning models. Prototypical networks do have certain drawbacks, though. The use of embedding space and distance metrics, which cannot always adequately portray the intricate relationships between data points, is one main area of concern. The representativeness and quality of the labeled training data can also have an impact on how effective these networks are. Furthermore, Prototypical Networks perform best in few-shot scenarios [33], but may struggle to define prototypes in use-cases with a high degree of complexity or diversity of classes [10]. CBR-CNN combines CNNs for feature extraction with CBR for decision-making in image classification. It begins by extracting features from input images using a CNN, then retrieves similar cases from a database based on these features [16]. The final classification decision is made by aggregating the classifications of retrieved cases. This approach leverages both the deep learning capabilities of CNNs and the knowledge-driven reasoning of CBR to enhance the interpretability and performance of image classification systems [23]. But, these approaches rely on a diverse and representative case database for generalization, increased computational complexity due to hybrid architecture, and sensitivity to retrieval mechanisms [29]. Capturing complex semantic relationships, dependency on annotated data, and adaptability to dynamic environments pose additional challenges [15]." }, { "figure_ref": [ "fig_0" ], "heading": "Materials and Methods", "publication_ref": [], "table_ref": [], "text": "We now shed light on how our approach to model-agnostic body part relevance assessment is structured. 
Figure 1 outlines the concept from the input street scene image to the so-called relevance maps. The details about the individual modules shown in this sketch are explained in the next sections." }, { "figure_ref": [], "heading": "Shapley Additive Explanations (SHAP)", "publication_ref": [ "b23", "b30", "b20", "b23" ], "table_ref": [], "text": "We already mentioned the LIME method briefly in Section 1. In this paragraph, we want to go into further detail in order to shed light on the explanation procedure and how it is connected to SHAP. In LIME, the local surrogate is obtained by solving\nξ(x) = \arg\min_{g ∈ G} L(f, g, π_{x'}) + Ω(g), \quad (1)\nwhere L is the loss over a sample set in the interpretable space, given f as the original model, g as the local explanation model in a set G of possible models, and π_{x'} as a proximity measure, or kernel, between local instances. The surrogate model g can be chosen arbitrarily, which allows a lot of freedom in modeling but could end up in a surrogate model that is not human-interpretable. Therefore, a penalty Ω(g) is added to the loss function to avoid unnecessarily complex surrogates.\nWith their approach called SHAP, Lundberg and Lee [24] proposed a set of methods that unify explanation approaches by incorporating properties of so-called Shapley values [31,21] into LIME. Shapley values originate from game theory and are well-defined, theoretically grounded measures for the contribution of a feature to a certain outcome. The formalism of Shapley values can be incorporated into a linear LIME model by a so-called \"Shapley kernel\" to calculate the contribution of each feature to the output. This approach is called KernelSHAP. Skipping over some particulars that are present in the initial work [24], the Shapley kernel sets the terms of equation (1) as\nΩ(g) = 0, \quad (2)\nπ_{x'}(z') = \frac{M - 1}{\binom{M}{|z'|} \, |z'| \, (M - |z'|)}, \quad (3)\nL(f, g, π_{x'}) = \sum_{z' ∈ Z} \left[ f(h_x^{-1}(z')) - g(z') \right]^2 π_{x'}(z'), \quad (4)\nwhere h_x is a mapping between the complex model input and the explanation model input, i.e., it is g(x') = f(x) when x = h_x(x'). The M input features z' ∈ {0, 1}^M are binary (\"present\" or \"absent\"), so that |z'| denotes the number of present features. Furthermore, the KernelSHAP method ensures that the Shapley values can be obtained by linear regression without special restrictions on the original model. Thus, it is model-agnostic." }, { "figure_ref": [ "fig_1" ], "heading": "Superpixel Model", "publication_ref": [ "b4" ], "table_ref": [], "text": "In image processing like OD, the input size is typically much larger than for other machine learning tasks. Those large input sizes make sample-based analyses mostly infeasible due to the large combinatorial space. This is why the input size has to be drastically reduced in order to have efficient sampling. Moreover, the contribution of a single pixel to the actual detection can be considered to be negligibly small. Thus, a commonly used trick is to summarize a region of image pixels as a so-called superpixel. One way would be a fixed tiling into rectangular or quadratic superpixels, ignoring the actual image content. The other way is to define superpixels by semantic regions with similar texture, color, shape, or, in our case of pedestrian detection, body parts. In contrast to the fixed tiling, the semantic regions usually have different sizes.\nThe superpixels now serve as the mapping h_x between the large pixel space and the smaller superpixel input space. Based on that, the KernelSHAP method estimates the attribution of the input features to the output. Thus, we need to parametrize the superpixels by feature values.
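Before detailing the superpixel parametrization, the following minimal Python sketch illustrates the Shapley kernel weighting of Eq. (3) and the weighted linear regression of Eq. (4). It is only an illustration of the principle under our own naming (shapley_kernel_weight, estimate_attributions); it is not taken from any released SHAP implementation.

```python
# Illustrative sketch (not from any released SHAP code): Shapley-kernel-weighted regression.
import numpy as np
from math import comb

def shapley_kernel_weight(M: int, s: int) -> float:
    """Shapley kernel pi_x'(z') of Eq. (3) for a coalition with s of M features present."""
    if s == 0 or s == M:
        # Empty and full coalitions get infinite weight in theory; practical implementations
        # enforce them as hard constraints, here we simply use a large finite weight.
        return 1e6
    return (M - 1) / (comb(M, s) * s * (M - s))

def estimate_attributions(Z: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Solve the weighted least-squares problem of Eq. (4).

    Z: binary coalition masks of shape (n_samples, M); y: model outputs of shape (n_samples,).
    Returns M feature attributions followed by an intercept (the baseline output).
    """
    M = Z.shape[1]
    w = np.array([shapley_kernel_weight(M, int(z.sum())) for z in Z])
    X = np.hstack([Z, np.ones((Z.shape[0], 1))])   # last column models the baseline
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef

# Toy usage with 5 hypothetical "body parts": the first two dominate the synthetic output.
rng = np.random.default_rng(0)
Z = rng.integers(0, 2, size=(256, 5))
y = 0.6 * Z[:, 0] + 0.3 * Z[:, 1] + 0.05 * rng.standard_normal(256)
print(np.round(estimate_attributions(Z, y), 3))
```

In the actual pipeline, y corresponds to the detection quality produced by the superpixel model, whose parametrization is described next.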
Our superpixel model, which serves as the explainable surrogate model, should have interpretable feature values. As we want to assess the relevance of body parts to the pedestrian detection, the feature values should represent the degree of information that is visible in the respective superpixel. Therefore, we introduce a presence value π_i for each superpixel i. A value of π_i = 1 means that the i-th superpixel is fully visible, as in the original input image. With decreasing presence value π_i → 0, the superpixel gets increasingly hidden. In this work, we use three methods to hide the information of the superpixel. The first method overlays the superpixel with noise sampled from the information of the remaining image by a multivariate normal distribution given by the RGB information. The second method overlays the superpixel with noise sampled from the information of the neighboring superpixel contents. The third method removes the superpixel content by a content-aware inpainting method implemented in the OpenCV library [5]. A presence value of π_i = 0 means that only the overlay is visible in the image, i.e., the superpixel information is completely hidden.\nThus, our superpixel model for the body part relevance assessment gets a presence vector ⃗π ∈ [0, 1]^k for k visible body parts as an input and samples an image based on this vector. This image is forwarded to the black-box OD model that should be analyzed. Figure 2 shows our three masking methods in the case of fully hidden body parts, i.e., ⃗π = ⃗0.\nThe typical outputs of an OD model are labels, bounding box (bbox) coordinates, and classification scores. The number of those elements depends on the number of detected objects in the input image. Thus, we need to formalize the detection quality of a distinct pedestrian of interest among multiple possible detections with multiple bboxes and scores. For pure classification, the classification score would be enough, but for OD, it is desirable for a detection quality score to include information about the precision of the bbox as well. Therefore, we calculate the Sørensen-Dice Coefficient (DICE) between our ground truth bbox and all detections' bboxes, defined by\nDICE(A, B) = \frac{2|A ∩ B|}{|A| + |B|}, \quad (5)\nwhere A and B are the two bboxes of interest. We identify the correct bbox by the maximum DICE with the ground truth bbox G. To also include the pure classification quality, we multiply this value with the respective classification score c for the detection. Thus, our detection quality q_p of a pedestrian p with detected bounding box P is\nq_p = DICE(P, G) \cdot c_p. \quad (6)\nSince DICE and c_p are values in the interval [0, 1], it follows that q_p ∈ [0, 1].\nAll in all, we have now wrapped our OD model into a surrogate superpixel model with an input vector and an output scalar." }, { "figure_ref": [ "fig_3" ], "heading": "Body Part Segmentation", "publication_ref": [ "b39", "b5", "b0" ], "table_ref": [], "text": "In order to introduce the superpixel model parametrization to our pedestrian detection model, we need a segmentation of the body parts. For the currently available large-scale pedestrian datasets like CityPersons [40] or EuroCity Persons [6], proper body part segmentations are not available. Thus, we utilize BodyPix, a trained model for body segmentation [1]. BodyPix enables us to use vast amounts of real-world pedestrian data.
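Given such a per-pixel body-part segmentation, the presence-value masking described in the previous subsection can be sketched as follows. This is only an illustrative sketch under our own naming (blend_body_parts), assuming a segmentation map with one integer label per pixel and a precomputed overlay image (e.g., an inpainted or noise image).

```python
# Illustrative sketch (not the paper's implementation): blend body parts toward an overlay.
import numpy as np

def blend_body_parts(image: np.ndarray, part_map: np.ndarray,
                     overlay: np.ndarray, presence: np.ndarray) -> np.ndarray:
    """Hide body parts according to a presence vector pi in [0, 1]^k.

    image:    original RGB image, shape (H, W, 3), float in [0, 1]
    part_map: integer body-part label per pixel, shape (H, W); -1 marks background
    overlay:  replacement content of the same shape as image (inpainted or noise image)
    presence: presence value pi_i for each body-part label i, shape (k,)
    """
    out = image.copy()
    for part_id, pi in enumerate(presence):
        mask = part_map == part_id
        # pi = 1 keeps the original pixels, pi = 0 shows only the overlay
        out[mask] = pi * image[mask] + (1.0 - pi) * overlay[mask]
    return out

# Toy usage with random arrays standing in for a real image and a BodyPix-style output.
rng = np.random.default_rng(0)
img = rng.random((128, 64, 3))
parts = rng.integers(-1, 12, size=(128, 64))   # 12 hypothetical body-part labels
noise_overlay = rng.random((128, 64, 3))       # stand-in for one of the masking overlays
pi = rng.random(12)                            # presence values in [0, 1]
masked = blend_body_parts(img, parts, noise_overlay, pi)
print(masked.shape)
```

The blended image is then passed through the OD model, and the detection quality q_p from Eq. (6) serves as the scalar output of the surrogate model. Note that the quality of the part_map itself is the limiting factor here, which leads to the drawbacks discussed next.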
However, two major drawbacks remain. The first is that the segmentation quality is rather low if the pedestrian's resolution is low, i.e., for pedestrians appearing far away in the image. The other major drawback is that there is no instance segmentation available, which means that for pedestrian groups or multiple pedestrians in one bbox, we can only access the same body parts of all pedestrians at one time. At least, we can reduce the impact of the resolution problem by focusing our relevance assessment only on the biggest pedestrians, measured by bbox area, in the dataset of interest. By default, BodyPix segments 24 different body parts, including front and back parts. We can simplify our analysis by introducing 3 further mappings in which body parts are unified. We call those mappings abstraction levels, where level 0 is the original BodyPix output. The granularity reduces with ascending level number. The mappings are shown in Figure 3. " }, { "figure_ref": [ "fig_4" ], "heading": "From Sampling to Local Explanation", "publication_ref": [ "b23", "b10" ], "table_ref": [], "text": "In KernelSHAP, one first defines an input and a baseline. The input is the instance to explain, so, in our case, the visible pedestrian, i.e., we set ⃗π_input = ⃗1 as the input. As the baseline, we set a completely absent or hidden pedestrian, thus it is ⃗π_baseline = ⃗0. The sampling of the binary perturbations is weighted with the Shapley kernel, and feature attribution values are calculated using weighted linear regression [24]. In this work, we will call those attribution values (body part) relevance scores.\nAs mentioned, KernelSHAP perturbs the instance by masking features, so that all body parts can be absent or present and, hence, it does not consider our still possible partial presences with 0 < π_i < 1. This is due to the Shapley-conform weighting kernel definition in Equation (3) that only considers binary values. Therefore, we introduce a second custom sampling and explanation method using continuous sampling and, like KernelSHAP, linear regression to obtain the scores, but without following the Shapley properties. A uniform sampling of the presence values would end up in many \"blended\" body parts, which is rather unrealistic. This is why we use a distribution that concentrates on values near 0 and 1. One distribution that has this property is the Beta distribution\nB(x, α, β) = \frac{Γ(α + β)}{Γ(α) Γ(β)} x^{α-1} (1 - x)^{β-1}, \quad (7)\nwith the so-called concentration coefficients α and β. By deliberately choosing proper values for α and β, we can not only steer the concentration strength towards the boundaries, but also the expectation value. Without loss of generality, we choose α = 0.2 and β = 0.1, resulting in a distribution that is concentrated on its limits at 0 and 1 with a slightly stronger concentration on 1. The expectation value results in an average pedestrian visibility of about 67 %. An expectation value above 50 % makes sense in our use case since otherwise, the pedestrian might often not be detected at all, resulting in a detection quality of 0. Thus, if too many generated samples end up in non-recognitions, we will not get insights into the relevance of body parts and the sampling becomes inefficient. Figure 4 shows a plot of the presence vector sampling probability density function (pdf) of our custom method. Another reason to concentrate the pdf on the limits is to obtain a robust linear regression even with a low number of samples due to many data points at the outermost regions of the regression domain.
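As a minimal sketch of this sampling step (the helper name sample_presence_vectors is ours, not from any released code), presence vectors could be drawn as follows. The mean of Beta(α, β) is α/(α + β), which for α = 0.2 and β = 0.1 gives the roughly 67 % average visibility mentioned above.

```python
# Illustrative sketch (not the paper's code): draw presence vectors from Beta(0.2, 0.1), Eq. (7).
import numpy as np

def sample_presence_vectors(n_samples: int, n_parts: int,
                            alpha: float = 0.2, beta: float = 0.1,
                            seed: int = 0) -> np.ndarray:
    """Draw presence vectors pi in [0, 1]^n_parts, concentrated near 0 and 1."""
    rng = np.random.default_rng(seed)
    return rng.beta(alpha, beta, size=(n_samples, n_parts))

pi = sample_presence_vectors(n_samples=64, n_parts=12)
print(pi.shape, pi.mean())   # the empirical mean should be close to 0.2 / (0.2 + 0.1) ≈ 0.67
```

Each sampled row ⃗π is then rendered by the superpixel model into one partially occluded pedestrian; keeping enough samples near both extremes is what makes the subsequent regression well-conditioned.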
This is also why too high visibility expectation values are counterproductive as well, since the model will probably detect all sampled instances, and we have fewer counterexamples to get insights into the prediction boundaries of the test model.\nOnce the presence vectors are sampled and propagated through the superpixel and the OD model, we have the corresponding pedestrian detection quality scores and can calculate our body part relevance scores for both explanation methods by linear regression. The relevance scores can be visualized by the body part shapes with colors representing the respective relevance scores. We call those visualization relevance maps. Furthermore, we estimate the error of the relevance 7)), the presence vectors of our sampling method are drawn from. The red dashed line shows the expectation value (mean) of the distribution. scores of our method by performing 4 independent regressions with a subset of 75 % of all data points. For each regression, we draw a different random subset. This method is commonly called \"bootstrapping\" [11]. Means and standard deviations (stds) of those fits yield the relevance scores and errors, respectively.\nAs stated already, our sampling based method is, if at all, just an approximation of the Shapley kernel, but it enables the generation of potentially more visible and realistically occluded pedestrian instances. In the following experiments, we will evaluate whether we can approximate the KernelSHAP results with our sampling approach, and if so, whether our method can approximate the Shapley values with fewer samples than KernelSHAP." }, { "figure_ref": [], "heading": "Experiments and Results", "publication_ref": [ "b19", "b5" ], "table_ref": [], "text": "In this section, we will perform a few experiments about comparability between KernelSHAP and our Beta sampling method. Additionally, since the number of samples is the crucial parameter that impacts the evaluation speed of both methods, we observe the stability of the relevance scores under small sample sizes. As a test model, we use a RetinaNet50 [20] object detection model trained on pedestrians from the EuroCity Persons [6] dataset. Since we could use any model, the training details do not matter here." }, { "figure_ref": [], "heading": "Local Explanations", "publication_ref": [ "b0" ], "table_ref": [], "text": "We evaluate KernelSHAP and our method by using our superpixel model for an example image from the EuroCity Persons dataset. For both methods, 2048 samples were drawn. In this case, the superpixel model uses the inpaint method to hide the body parts. The original image, segmentation map and the resulting relevance maps are shown in Figure 5. In addition, we show the error map of our Beta sampling method. Fig. 5: Exemplary body part segmentation by BodyPix [1] and corresponding body part relevance maps of KernelSHAP (middle plot) and our sampling method (second from right). Additionally, an error map for our method is shown in the rightmost plot.\nWe notice that the relevance maps calculated by KernelSHAP and our method are similar. Nevertheless, they show some minor differences. One problem in XAI is that there is no \"ground truth\" explanation, especially not for model-agnostic methods. Thus, we treat the KernelSHAP results as the standard and try to compare our results with it because KernelSHAP has a heavier game theoretical basement due to the Shapley formalism." 
}, { "figure_ref": [ "fig_5" ], "heading": "\"Global\" Explanations", "publication_ref": [], "table_ref": [], "text": "KernelSHAP and our method are, per se, local explanation methods. Nevertheless, it is interesting to investigate how the model under investigation behaves on the majority of (pedestrian) instances. An easy way to do this is to analyze a representative selection of pedestrian instances and average their relevance scores for each body part, where fully occluded body parts are ignored. In our experiments, we take the biggest pedestrians regarding bbox area in the dataset of choice. In AD street scene datasets, pedestrians are usually quite small, i.e., they have a low resolution, so that the segmentation capabilities of BodyPix are even more limited. If high-resolution data is available, a different selection might also make sense, e.g., averaging the biggest, intermediate, and smallest instances separately. To visualize the results intuitively, we color-code the respective body parts by their average relevance scores in a pictogram of a human body. An example is shown in Figure 6. Note that this is not a global explanation strategy in the proper sense, which is the reason for the inverted commas in this section's title." }, { "figure_ref": [ "fig_7" ], "heading": "Efficient Sampling", "publication_ref": [], "table_ref": [], "text": "In this experiment, we want to see how many samples are needed, at least, to get a fairly stable relevance score determination. We perform these experiments using abstraction levels 1 and 3 of the body parts (see Figure 3); for the sampling sizes, we use powers of 2 from 8 to 4096. In order to also cover how the methods perform for different pedestrians, we again perform the sampling on the 100 biggest pedestrians in the EuroCity Persons dataset regarding bbox area. Among those, 2 could not be segmented properly, so that 98 pedestrians contribute to the final results shown in Figure 7. In both abstraction levels, body parts that are not merged with other body parts, namely face and torso, have agreeing relevance scores. A remarkable fact is that the relevance scores differ among the masking methods. For instance, the torso has a significantly higher relevance score for the image noise masking than for the inpaint masking. However, comparing KernelSHAP with our Beta sampling method, we observe that Beta sampling yields more stable results. At 64 samples per pedestrian, the Beta sampling already gives results comparable to the higher sampling sizes. KernelSHAP, however, needs more samples to give converging relevance scores, if they converge at all. Conclusively, all experiments show that our test model mainly focuses on torso and face regions, which means that the clear presence or visibility of torso and head mainly drives the pedestrian detection quality. " }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b23" ], "table_ref": [], "text": "As already mentioned, KernelSHAP and our Beta sampling method yield comparable relevance scores. This makes sense by looking at the similar sampling properties. The Shapley kernel prefers samples with either very few or very many visible body parts, as shown in [24]. Even if the Beta sampling does not follow the Shapley properties exactly, the pdf is concentrated on 0 and 1 and, thus, samples are similar, with the difference of being non-binary.
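This preference of the Shapley kernel for almost-empty and almost-full coalitions can be checked directly from Equation (3); the short sketch below is only illustrative (the helper name and the choice M = 14 for abstraction level 1 are ours) and mirrors the way the Beta pdf piles its mass near 0 and 1.

```python
from math import comb

def shapley_kernel_weight(M: int, k: int) -> float:
    """Shapley kernel weight for a coalition with k of M parts visible
    (Equation (3)); k = 0 and k = M are handled separately in KernelSHAP."""
    return (M - 1) / (comb(M, k) * k * (M - k))

M = 14  # number of body parts at abstraction level 1
for k in (1, 2, 7, 12, 13):
    print(k, shapley_kernel_weight(M, k))
# The weights are largest for k = 1 and k = 13, i.e. almost-empty and
# almost-full coalitions -- the binary analogue of a Beta pdf whose mass
# is concentrated near 0 and 1.
```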
Therefore, we could say that the Beta sampling method is a continuation, or interpolation, of the Shapley kernel sampling.\nThe experiments show that our Beta sampling method requires fewer samples than KernelSHAP for a robust relevance score assessment. Note that the two methods introduced in this work are local explanation methods per se. In order to gain insights into the global explainability of the test model, many local evaluations must be carried out, as we did in the experiments with many pedestrian instances. Thus, our method enables time-efficient analysis for large-scale datasets.\nNevertheless, a shortcoming of this work is the usage of the BodyPix body part segmentation model for pedestrian detection. BodyPix is mainly intended for high-resolution footage of human bodies. In street scene data, however, pedestrians are usually quite far away and thus have low resolutions. Therefore, BodyPix can hardly segment proper body parts for those pedestrians. Additionally, BodyPix cannot discriminate different pedestrian instances, which is problematic in pedestrian detection since pedestrians often occur in groups and bboxes overlap. This is why this work has to be seen as a proof-of-concept for the pedestrian detection use case. It is desirable to use our methods with datasets that provide proper body part and instance segmentation maps. To our knowledge, there is currently no such large-scale street scene dataset available. However, our method could be applied to other tasks concerning (street) scene understanding. The most critical bottleneck is the availability of labels; in case of uncertainty, one could also stick to fixed image regions like rectangular superpixels. This could also be an approach if the semantic connection between image regions is not as clear as in the case of, for instance, body parts of pedestrians." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Our work demonstrates that KernelSHAP can be adapted to OD use cases. Moreover, the robustness can be increased by using non-binary sampling that is still similar to the Shapley kernel sampling. Our sampling method approximates the Shapley values using fewer samples than KernelSHAP, which makes the evaluation of large-scale object detection on large-scale datasets more efficient. With specific reference to our application of pedestrian detection, it must be noted that BodyPix can only be used to a very limited extent for street scene shots due to the low resolution of the pedestrians. A possible starting point for further research would therefore be the use of simulation data, for which detailed semantic and instance segmentation maps are more readily available. Additionally, simulation data can further enrich the analysis by considering attributes beyond body parts, like accessories, or vehicles like bikes, wheelchairs, buggies, etc. Simulations also make it possible to obtain data tailored to answer specific questions or to cover scenarios that rarely appear in real-world data. " } ]
Model-agnostic explanation methods for deep learning models are flexible regarding usability and availability. However, since they can only manipulate the input and observe changes in the output, they perform poorly when used with complex model architectures. For models with large inputs, as in object detection, sampling-based methods like KernelSHAP are inefficient due to the many computation-heavy forward passes through the model. In this work, we present a framework for using sampling-based explanation methods in a computer vision context, demonstrated by body part relevance assessment for pedestrian detection. Furthermore, we introduce a novel sampling-based method similar to KernelSHAP that is more robust for small sampling sizes and, thus, more efficient for explainability analyses on large-scale datasets.
Model-agnostic Body Part Relevance Assessment for Pedestrian Detection
[ { "figure_caption": "Fig. 1 :1Fig. 1: Concept overview of our approach to model-agnostic body part relevance assessment.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Comparison of our masking methods demonstrated on a pedestrian image from the EuroCity Persons dataset [6].", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "(a) Level 0. This is the original output of the BodyPix model. It has in total 24 body parts including front and back for the arm, leg, and torso parts. The orientations are w.r.t. the ego perspective. (b) Level 1. In tiis first abstraction level, the two face halves are unified. Additionally, there is no differentiation of front and back parts any more. Overall, this results in 14 body parts. (c) Level 2. In this second abstraction level, the upper and lower parts of arms and legs are unified, as well, resulting in 10 remaining body parts. (d) Level 3. In this third abstraction level, hands are unified with the arms and feet are unified with the legs resulting in 6 remaining body parts.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig. 3: Abstraction levels of our body part segmentation. The levels represent the granularity from detailed (a) to less detailed (d).", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig.4: Plot of the Beta distribution (Equation (7)), the presence vectors of our sampling method are drawn from. The red dashed line shows the expectation value (mean) of the distribution.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: Example of body part relevance maps for \"global\" model explanation. Since we have multiple instances now, a human pictogram with color-coded body parts serves as a relevance map.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "masking, abstraction lv. 1", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 7 :7Fig.7: Results of the sampling experiment for abstraction levels 1 and 3, image noise and inpaint masking, and KernelSHAP and our custom Beta sampling method. For each sampling size, the solid lines are the mean relevance scores for the biggest 100 pedestrians in the EuroCity Persons dataset. Transparent bands show the respective stds of the means.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "ABN Attention Branch Network AD autonomous driving AI artificial intelligence bbox bounding box CAV Concept Activation Vector CNN Convolutional Neural Network Grad-CAM Gradient-weighted Class Activation Mapping GAN Generative Adversarial Network CBR Case-Based Reasoning DICE Sørensen-Dice Coefficient LIME Local Interpretable Model-agnostic Explanations OD object detection pdf probability density function SHAP Shapley Additive Explanations std standard deviation TCAV Testing with Concept Activation Vector XAI Explainable Artificial Intelligence", "figure_data": "", "figure_id": "tab_1", "figure_label": "7", "figure_type": "table" } ]
Maurice Günder; Sneha Banerjee; Rafet Sifa; Christian Bauckhage
[ { "authors": " Bodypix", "journal": "", "ref_id": "b0", "title": "", "year": "" }, { "authors": "A ; S ; R ; S ", "journal": "Decision Analytics Journal", "ref_id": "b1", "title": "A systematic review of explainable artificial intelligence models and applications: Recent developments and future trends", "year": "2023" }, { "authors": "S Bali; S S Tyagi", "journal": "International Journal of Advanced Studies of Scientific Research", "ref_id": "b2", "title": "A review of vision-based pedestrian detection techniques", "year": "2018-01-14" }, { "authors": "D Bau; J Y Zhu; H Strobelt; B Zhou; J B Tenenbaum; W T Freeman; A Torralba", "journal": "", "ref_id": "b3", "title": "Gan dissection: Visualizing and understanding generative adversarial networks", "year": "2018" }, { "authors": "G Bradski", "journal": "Dr. Dobb's Journal of Software Tools", "ref_id": "b4", "title": "The OpenCV Library", "year": "2000" }, { "authors": "M Braun; S Krebs; F B Flohr; D M Gavrila", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b5", "title": "Eurocity persons: A novel benchmark for person detection in traffic scenes", "year": "2019" }, { "authors": "D V Carvalho; E M Pereira; J S Cardoso", "journal": "Electronics", "ref_id": "b6", "title": "Machine learning interpretability: A survey on methods and metrics", "year": "2019" }, { "authors": "W Chen; Y Zhu; Z Tian; F Zhang; M Yao", "journal": "Array", "ref_id": "b7", "title": "Occlusion and multi-scale pedestrian detection a review", "year": "2023" }, { "authors": "X Chen; Y Duan; R Houthooft; J Schulman; I Sutskever; P Abbeel", "journal": "Advances in neural information processing systems", "ref_id": "b8", "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "year": "2016" }, { "authors": "O Davoodi; S Mohammadizadehsamakosh; M Komeili", "journal": "Scientific Reports", "ref_id": "b9", "title": "On the interpretability of part-prototype based classifiers: a human centric analysis", "year": "2023" }, { "authors": "B Efron; R J Tibshirani", "journal": "CRC press", "ref_id": "b10", "title": "An introduction to the bootstrap", "year": "1994" }, { "authors": "H Fukui; T Hirakawa; T Yamashita; H Fujiyoshi", "journal": "", "ref_id": "b11", "title": "Attention branch network: Learning of attention mechanism for visual explanation", "year": "2019" }, { "authors": "A Genovese; V Piuri; F Scotti", "journal": "IEEE", "ref_id": "b12", "title": "Towards explainable face aging with generative adversarial networks", "year": "2019" }, { "authors": "Q Hou; D Zhou; J Feng", "journal": "", "ref_id": "b13", "title": "Coordinate attention for efficient mobile network design", "year": "2021" }, { "authors": "M T Keane; E M Kenny", "journal": "Springer", "ref_id": "b14", "title": "How case-based reasoning explains neural networks: A theoretical analysis of xai using post-hoc explanation-by-example from a survey of ann-cbr twin-systems", "year": "2019" }, { "authors": "M J Khan; H Hayat; I Awan", "journal": "Human-centric Computing and Information Sciences", "ref_id": "b15", "title": "Hybrid case-base maintenance approach for modeling large scale case-based reasoning systems", "year": "2019" }, { "authors": "B Kim; M Wattenberg; J Gilmer; C Cai; J Wexler; F Viegas; R Sayres", "journal": "", "ref_id": "b16", "title": "Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav)", "year": "2018" }, { "authors": "J H Kim; B T Zhang", "journal": 
"", "ref_id": "b17", "title": "Visual explanations from hadamard product in multimodal deep networks", "year": "2017" }, { "authors": "H Li; Y Lin; K Mueller; W Xu", "journal": "Springer", "ref_id": "b18", "title": "Interpreting galaxy deblender gan from the discriminator's perspective", "year": "2020" }, { "authors": "T Y Lin; P Goyal; R Girshick; K He; P Dollár", "journal": "", "ref_id": "b19", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "S Lipovetsky; M Conklin", "journal": "Applied Stochastic Models in Business and Industry", "ref_id": "b20", "title": "Analysis of regression in game theory approach", "year": "2001-10" }, { "authors": "L Liu; W Ouyang; X Wang; P Fieguth; J Chen; X Liu; M Pietikäinen", "journal": "International Journal of Computer Vision", "ref_id": "b21", "title": "Deep learning for generic object detection: A survey", "year": "2019-10" }, { "authors": "A Louati; H Louati; Z Li", "journal": "The Journal of Supercomputing", "ref_id": "b22", "title": "Deep learning and case-based reasoning for predictive and adaptive traffic emergency management", "year": "2021" }, { "authors": "S Lundberg; S I Lee", "journal": "", "ref_id": "b23", "title": "A unified approach to interpreting model predictions", "year": "2017" }, { "authors": "C Ning; L Menglu; Y Hao; S Xueping; L Yunhong", "journal": "Complex &; Intelligent Systems", "ref_id": "b24", "title": "Survey of pedestrian detection with occlusion", "year": "2020-10" }, { "authors": "M T Ribeiro; S Singh; C Guestrin", "journal": "", "ref_id": "b25", "title": "Explaining the predictions of any classifier", "year": "2016" }, { "authors": "N Rodis; C Sardianos; G T Papadopoulos; P Radoglou-Grammatikis; P Sarigiannidis; I Varlamis", "journal": "", "ref_id": "b26", "title": "Multimodal explainable artificial intelligence: A comprehensive review of methodological advances and future research directions", "year": "2023" }, { "authors": "C Rudin; C Chen; Z Chen; H Huang; L Semenova; C Zhong", "journal": "Statistic Surveys", "ref_id": "b27", "title": "Interpretable machine learning: Fundamental principles and 10 grand challenges", "year": "2022" }, { "authors": "G Safa; D Akila; M H Farida", "journal": "", "ref_id": "b28", "title": "A survey on hybrid case-based reasoning and deep learning systems for medical data classification", "year": "2022" }, { "authors": "R R Selvaraju; A Das; R Vedantam; M Cogswell; D Parikh; D Batra", "journal": "", "ref_id": "b29", "title": "Gradcam: Why did you say that?", "year": "2016" }, { "authors": "L S Shapley", "journal": "RAND Corporation", "ref_id": "b30", "title": "A Value for N-Person Games", "year": "1952" }, { "authors": "Y Shen; C Yang; X Tang; B Zhou", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b31", "title": "Interfacegan: Interpreting the disentangled face representation learned by gans", "year": "2020" }, { "authors": "J Snell; K Swersky; R Zemel", "journal": "Advances in neural information processing systems", "ref_id": "b32", "title": "Prototypical networks for few-shot learning", "year": "2017" }, { "authors": "S Wang; C Zhao; L Huang; Y Li; R Li", "journal": "Computational Intelligence", "ref_id": "b33", "title": "Current status, application, and challenges of the interpretability of generative adversarial network models", "year": "2023" }, { "authors": "M Wäschle; F Thaler; A Berres; F Pölzlbauer; A Albers", "journal": "Frontiers in Artificial Intelligence", "ref_id": "b34", "title": "A review on ai safety 
in highly automated driving", "year": "2022" }, { "authors": "K Xu; J Ba; R Kiros; K Cho; A Courville; R Salakhudinov; R Zemel; Y Bengio", "journal": "PMLR", "ref_id": "b35", "title": "Show, attend and tell: Neural image caption generation with visual attention", "year": "2015" }, { "authors": "M D Zeiler; R Fergus", "journal": "Springer", "ref_id": "b36", "title": "Visualizing and understanding convolutional networks", "year": "2014" }, { "authors": "Q Zhang; Y N Wu; S C Zhu", "journal": "", "ref_id": "b37", "title": "Interpretable convolutional neural networks", "year": "2018" }, { "authors": "Q Zhang; Y Yang; H Ma; Y N Wu", "journal": "", "ref_id": "b38", "title": "Interpreting cnns via decision trees", "year": "2019-06" }, { "authors": "S Zhang; R Benenson; B Schiele", "journal": "", "ref_id": "b39", "title": "Citypersons: A diverse dataset for pedestrian detection", "year": "2017" } ]
[ { "formula_coordinates": [ 7, 231.88, 141.25, 39.32, 8.74 ], "formula_id": "formula_1", "formula_text": "Ω(g) = 0" }, { "formula_coordinates": [ 7, 223.93, 155.63, 256.66, 26.42 ], "formula_id": "formula_2", "formula_text": "π x ′ (z ′ ) = M -1 M |z ′ | |z ′ |(M -|z ′ |)(3)" }, { "formula_coordinates": [ 7, 205.66, 186.15, 274.94, 24.17 ], "formula_id": "formula_3", "formula_text": "L(f, g, π x ′ ) = z ′ ∈Z f (h -1 x (z ′ )) -g(z ′ ) 2 π x ′ (z ′ )(4)" }, { "formula_coordinates": [ 8, 251.8, 520.78, 228.8, 22.31 ], "formula_id": "formula_4", "formula_text": "DICE(A, B) = 2|A ∩ B| |A| + |B| ,(5)" }, { "formula_coordinates": [ 8, 260.31, 615.01, 220.28, 9.65 ], "formula_id": "formula_5", "formula_text": "q p = DICE(P, G) • c p .(6)" }, { "formula_coordinates": [ 8, 377.97, 632.21, 43.92, 9.65 ], "formula_id": "formula_6", "formula_text": "q p ∈ [0, 1]." }, { "formula_coordinates": [ 10, 222.96, 359.67, 257.63, 22.31 ], "formula_id": "formula_7", "formula_text": "B(x, α, β) = Γ (α + β) Γ (α)Γ (β) x α-1 (1 -x) β-1(7)" } ]
10.1038/75556
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b3", "b4", "b5", "b7" ], "table_ref": [], "text": "The distinction between continuants (aka endurants) and occurrents (aka perdurants) is widely accepted in many upper/foundational ontologies. The basic idea is that continuants are entities that persist in time and that can undergo changes, whereas occurrents are entities that unfold themselves in time and that can be changes of continuants. Examples of continuants include material objects (e.g. molecules, people, and planets) and properties in the broad sense of the term (e.g. the color of this apple and the fragility of glass). Paradigmatic examples of such occurrents are often grouped under the heading of \"process\" or \"event\": cell division, the life of this person, and the earth orbiting around the sun, for instance. There is a growing demand for a solid ontology of such occurrents, as is illustrated by the fact that, besides molecular function and cellular component, biological process is one of the three principal categories in the Gene Ontology (GO) [1].\nIn this paper we will explore an ontology of processes compatible with the foundational framework of Basic Formal Ontology (BFO) [2][3] [4]. We use the term \"process\" in the BFO sense of the term throughout the paper (see Section 7 for a radically different view of processes and events from BFO's). One reason why we investigate the BFO ontology of processes is that it may remain relatively underspecified up to date. For example, process profiles [5] have been before proposed in the BFO community: two heart beating processes of the \"same rate\" can be analyzed as having as parts two instances of the same process profile universal such as 72bpm rate process profile. But they have been eventually left out of the latest BFO version [3]. For a noteworthy recent work on processes in BFO, Jarrar & Ceusters [6] propose a classification of processes in BFO by focusing on how some wellknown aspectual notions used to classify verbal phrases -viz. homeomericity, cumulativity, telicity, instantaneity, and atomicity -can be ontologically reinterpreted to build BFO-based process ontologies. It will be a valuable complementary study for us to consider carefully what kinds of changes processes are and on which conditions one process is the same as another.\nThe paper centers around the fundamental question of what is the identity of processes, or what is a set of necessary and jointly sufficient conditions (represented by a \"if and only if\" or \"iff\" clause) for two processes being identical. 2 It is organized as follows. Section 2 is devoted to preliminaries. On the basis of the recent work by Guarino, Baratella and Guizzardi [8] (henceforth \"GBG\"), Section 3 introduces two kinds of processes: specifically dependent continuant changes and spatial changes. Section 4 investigates a compositional approach to the identity of processes: it may be characterized with identity criteria for specifically dependent continuant changes and spatial changes, on the assumption that any process is a mereological sum of these two kinds of processes. Section 5 develops a causal approach to the identity of processes based on a dispositional view of processes according to which any process is a realization of some disposition. Section 6 offers discussion. Section 7 discusses related work. Section 8 concludes the paper." }, { "figure_ref": [], "heading": "Preliminaries 2.1. 
The basic structure of BFO", "publication_ref": [ "b8", "b9", "b1", "b10" ], "table_ref": [], "text": "BFO is an upper ontology that is theoretically underpinned by the realist methodology for ontology development [9], according to which ontologies should represent actual entities as described by science, as well as by perspectivalism: BFO is perspectival along two major dimensions, of continuants and occurrents and these dimensions may provide equally accurate descriptions of the same reality. Continuants persist in time: they maintain their identity and may gain or lose parts over the course of time. Occurrents unfold themselves through time. (Note that we will assume the framework of classical physics in this article as BFO often does, even if there are plans to extend BFO beyond the classical realm.)\nContinuants are further divided into independent continuants (such as material objects and spatial regions) and dependent continuants. Among dependent continuants are specifically dependent continuants, which depend (existentially) on at least one independent continuant. Two major subtypes of specifically dependent continuants are realizable entities and qualities. The former can be realized in processes of specific correlated types in which the bearer participates: e.g. the disposition of this glass to be broken, the function of this heart to pump blood, and the role of being a doctor (for more thoughts, see Toyoshima et al.'s [10] systematic study of realizable entities in BFO). They can be present even when not realized: this glass is fragile even if it is not broken, for instance. The latter are fully exhibited or manifested or realized if they are borne at all: e.g. color, shape, and mass.\nAmong realizable entities, a disposition in BFO is defined as: \"A realizable entity (…) that exists because of certain features of the physical makeup of the independent continuant that is its bearer. One can think of the latter as the material basis of the disposition in question\" ( [2], p. 178). Typical examples of dispositions include fragility (the disposition to break when pressed with sufficient force) and solubility (the disposition to dissolve when put in a solvent). The material basis of a disposition is some material part(s) of the disposition bearer in virtue of which the disposition exists. BFO also describes a disposition as an \"internally grounded realizable entity\": if a disposition ceases to exist, then the physical makeup of the bearer is thereby changed. The fragility of this glass has as material basis some molecules of the glass and the glass is physically changed when it is no longer fragile, for instance.\nAs for occurrents, a process is an occurrent that exists in time by occurring, has temporal parts, and depends on at least one independent continuant as participant. A spatiotemporal region is an occurrent at or in which occurrent entities (notably processes) can be located. A temporal region is an occurrent that results from the projection of a spatiotemporal region onto the temporal dimension (for more thoughts, see Galton's [11] discussion on temporal and spatiotemporal regions in BFO)." }, { "figure_ref": [], "heading": "Ontology of dispositions", "publication_ref": [ "b13", "b13" ], "table_ref": [], "text": "We will use a BFO-compliant extended theory of dispositions along with previous works [10][12][13] [14]. 
To be realized in a process, a disposition needs to be triggered by some other process: a process of pressing this glass with sufficient force triggers the fragility of the glass, which is realized in a process of glass-breaking. Note that dispositions may exist even if they are not realized or even triggered: a glass is fragile even if it never breaks or even if it never undergoes any shock. We will also utilize what Barton et al. [14] call the \"PARTHOOD\" model of dispositions, according to which a part of a realization of a disposition is also a realization of this disposition: for instance, the short cracking process of this glass that immediately precedes its splitting into many pieces is a realization of the fragility of the glass because it is part of the glass-breaking process (in which the fragility is realized)." }, { "figure_ref": [ "fig_0" ], "heading": "Categories and relations", "publication_ref": [ "b14" ], "table_ref": [ "tab_1" ], "text": "We will introduce the terms for BFO categories and their associated unary predicates -see the taxonomy depicted in Figure 1 (where a type A being a subtype of a type B implies all instances of A being instances of B). We will also introduce the terms for relations and their associated relational predicates -see Table 1 for a list of relational predicates and their explanation. As for parthood, we will assume so-called classical (extensional) mereology (e.g. [15], Section 2).\nIn formalization, variables and individual constants stand for particulars, predicates stand for universals and defined classes (unary predicates) and relations, and free variables are universally quantified. We will employ conventional logical symbols of first-order logic with identity. In the text, terms for instances and classes will be boldified and italicized, respectively: for example, this particular person John and the human type Human." }, { "figure_ref": [], "heading": "Continuant", "publication_ref": [], "table_ref": [], "text": "Independent " }, { "figure_ref": [], "heading": "Simple processes", "publication_ref": [ "b15", "b17" ], "table_ref": [], "text": "A process is, in nature, a change of some participant(s) of this process. GBG define simple events as qualitative changes by triples <o,q,t> where o is the object of change, q is the subject of changewhich is a quality, as found in the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) [16][17]and t is the time during which the change happens. Either q inheres in o, in which case this is a direct qualitative change; or q inheres in a part of o, in which case this is an indirect qualitative change. For example, when a person gesticulates by moving his hand: the gesticulation is an indirect change of the person, whereas the hand moving is a direct change of his hand.\nWe will here endorse a similar view on the metaphysics of processes 5 , applied to BFO:independent continuants: when an independent continuant changes, it always changes with respect to some aspect of that independent continuant. Note however that position is a quality in GBG's sense but not in BFO. 
Therefore, we need to introduce two kinds of simple processes involving what GBG call a direct qualitative change of an independent continuant: specifically dependent continuant (SDC) changes, namely change with respect to a single specifically dependent continuant of the independent continuant (presented in Section 3.1); and spatial changes, namely change with respect to the spatial regions that its parts occupy (presented in Section 3.2). We will deal in this section with direct qualitative change in the sense of GBG; and come back in Section 6.3 to a possible reduction of indirect qualitative change to direct qualitative change. For an illustrative purpose, we will employ Davidson's [18] famous example in which a sphere s1 is rotating and heating at the same time." }, { "figure_ref": [], "heading": "Specifically dependent continuant change", "publication_ref": [ "b1", "b18" ], "table_ref": [], "text": "In the driving example, we can identify this process pheat of s1 heating up such that pheat has s1 as participant. Then pheat is a change of s1 with respect to the temperature, say temperature1which is a qualityof s1. Suppose that s1 is at 60 degrees Celsius at time t1 and at 70 degrees Celsius at time t2 such that pheat temporally occupies a connex temporal region encompassing t1 and t2. 6 Following BFO's standard view ( [2], p. 97), we introduce the term \"determinate\" for universals: roughly, a universal X is determinate of a universal Y if and only if being X is a specific way of being Y [19]. The received analysis of this case of temperature change appeals to two determinates of the determinable Temperature, namely 60°C Temperature and 70°C Temperature: temperature1 (which is an instance of Temperature) is an instance of 60 °C Temperature at time t1 and an instance of 70 °C Temperature at time t2.\nWe can say that pheat is a process in which s1 changes with respect to temperature1. The analysis of this example can lead to the following definition of the term \"specifically dependent continuant change\" such that pheat is a specifically dependent continuant change: specifically dependent continuant change =def. A process that is a change of an independent continuant with respect to a single specifically dependent continuant thereof.\nSDC changes can concern qualities such as temperature, but also realizable entitiesfor example, a metal sphere becoming more or less ductile (where ductility is a disposition to change shape). They can also concern the coming into existence of an SDC (e.g. the appearance of transparency of a portion of sand as it is transformed into glass) or the ceasing to exist of an SDC (e.g. the disappearance of structural integrity of a window as it is broken). We will henceforth employ the expressions, such as \"pheat is a change of temperature1\" (formally: PSDC(pheat, temperature1)), to characterize specifically dependent continuant changes.\nWe can then formalize the definition of a specifically dependent continuant change in terms of PSDC, which is taken to be functional:\nD1 SDCC(p) =def. PRO(p) ∧ ∃sdc PSDC(p,sdc)\n\"p is a specifically dependent change\" means: p is a process and there exist sdc such that p is a change of sdc." }, { "figure_ref": [], "heading": "Spatial change", "publication_ref": [], "table_ref": [], "text": "In our driving scenario, we can identify this process prot of s1 rotating such that prot has s1 as participant. 
Then prot is a change of s1 with respect to the spatial region that its parts occupy: parts of s1 occupy different spatial regions at different times over the course of prot. We propose the following definition of the term \"spatial change\" such that prot is a spatial change: spatial change =def. A process that is a change of an independent continuant with respect to the spatial region that some part thereof occupies.\nWhat is commonly called \"motion process\" is a spatial change, although we might imagine more exotic forms of spatial changes (e.g. teleportation)." }, { "figure_ref": [], "heading": "Discussion of simple processes", "publication_ref": [], "table_ref": [], "text": "Note that the categories of Spatial change and SDC change are not disjoint. For example, a change of shape of a sponge as I press it is a spatial change (as its parts are moving through space), but also arguably an SDC change, since shape is commonly considered as a quality (and thus an SDC); indeed, examples of qualities in BFO include \"the shape of this hand\" ([2], p. 96).\nNote also that simple processes are not all atomic: some simple processes can have as proper part other simple processes. For example, the spatial change of rotation of the sphere has as part the spatial changes of rotation of its upper hemisphere and of rotation of its lower hemisphere. Similarly, the color change of an apple from green to red has as part the color changes of its left half and of its right half." }, { "figure_ref": [], "heading": "Compositional approach to the identity of processes", "publication_ref": [], "table_ref": [], "text": "We first investigate a compositional approach to the identity of processes. The central idea is that a general criterion of the identity of process aggregates (that are sums of specifically dependent continuant changes and spatial changes) can be provided in terms of the identity criteria for specifically dependent continuant changes and spatial changes. This would provide a criterion of identity of processes if one hypothesizes that any process is a mereological sum of instances of those two kinds of changes." }, { "figure_ref": [], "heading": "Identity criterion for specifically dependent continuant changes", "publication_ref": [], "table_ref": [], "text": "The identity condition of a specifically dependent continuant change can be provided in terms of the relevant specifically dependent continuant(s) therein and the time at which it occurs:\nA1 PSDC(p1,sdc1) ∧ PSDC(p2,sdc2) ∧ OTR(p1,t1) ∧ OTR(p2,t2) → [p1=p2 ↔ (sdc1=sdc2 ∧ t1= t2)] If p1 is a change of sdc1,\np2 is a change of sdc2, p1 temporally occupies t1, and p2 temporally occupies t2, then: p1 is identical with p2 iff sdc1 is identical with sdc2 and t1 is identical with t2.\nFor instance, the identity of pheat is determined by the quality temperature1 and the temporal interval which pheat occupies." 
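As a purely illustrative aside (not part of the original formalization), the identity criterion A1 can be operationalized in an information system as an equality check on a record holding the changing specifically dependent continuant and the occupied temporal region; the Python sketch below uses hypothetical class and identifier names that are not BFO terms.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SDCChange:
    """A record for a specifically dependent continuant change, identified
    (per A1) by the changing SDC and the occupied temporal region."""
    sdc_id: str              # e.g. "temperature1", the quality that changes
    temporal_region_id: str  # e.g. "t1-t2", the temporal region occupied

# Per A1, two such records denote the same process iff both components match:
assert SDCChange("temperature1", "t1-t2") == SDCChange("temperature1", "t1-t2")
assert SDCChange("temperature1", "t1-t2") != SDCChange("temperature1", "t2-t3")
```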
}, { "figure_ref": [], "heading": "Identity criterion for spatial changes", "publication_ref": [], "table_ref": [], "text": "The identity condition of a spatial change can be provided in terms of its participants 7 and the time at which it occurs:\nA2 SC(p1) ∧ SC(p2) ∧ OTR(p1,t1) ∧ OTR(p2,t2) → [p1=p2 ↔ [∀x(PCSP(x,p1) ↔ PCSP(x, p2)) ∧ t1= t2)]] If p1 is a spatial change,\np2 is a spatial change, p1 temporally occupies t1, and p2 temporally occupies t2, then: p1 is identical with p2 iff 1) for any x, x participates in the simple process p1 iff x participates in the simple process p2 and 2) t1 is identical with t2.\nFor example, the identity of prot is determined by its participants (in particular s1) and the temporal interval during which prot occurs." }, { "figure_ref": [], "heading": "Identity criterion for process aggregates", "publication_ref": [], "table_ref": [], "text": "Then, we introduce the term \"process aggregate\" that can be defined in natural and formal languages as follows, although it is formally inexpressible in first-order logic owing to the use of the natural number n (which will also apply to A3 below): process aggregate =def. A process that is a sum of multiple different simple processes.\nD2 PROA(p) =def. PRO(p) ∧ ∃n,sp1,…,spn (n≥2 ∧ ∧1≦i≦n SP(spi) ∧ SUM(p, sp1,…,spn) ∧ sp1≠sp2)\n\"p is a process aggregate\" means: p is a process, and there are at least two different simple processes sp1,…,spn such that p' is a sum of sp1,…,spn.\nNote that a process aggregate can be just a sum of specifically dependent continuant changes, just a sum of spatial changes or a sum of some SDC change(s) and some spatial change(s). To illustrate process aggregates with our driving example, we can think of a process aggregate that is the sum of the spatial change prot and many specifically dependent continuant changes (such as pheat).\nWe can also provide the identity condition of process aggregates. Informally speaking, two process aggregates are identical iff they have as parts the same simple processes. To put it formally:\nA3 [PROA(p1) ∧ PROA(p2) ∧ p1=p2] ↔ ∃n,sp1,…,spn, (n≥2 ∧ ∧1≦i≦n SP(spi) ∧ SUM(p1, sp1,…,spn) ∧ SUM(p2, sp1,…,spn) )\nTwo process aggregates p1 and p2 are identical iff there exist simple processes sp1,…,spn, such that both p1 and p2 are the sum of sp1,…,spn." }, { "figure_ref": [], "heading": "Hypothesizing a general criterion for the identity of processes", "publication_ref": [], "table_ref": [], "text": "Let us formulate the following hypothesis:\nProcess Decomposition Hypothesis (PDH) Any process is a simple process or process aggregate, i.e. a mereological sum of simple processes (namely SDC changes and spatial changes). (Formally: PRO(p) ↔ (PROA(p) ∨ SP(p)). ) In particular, the PDH implies that the identity criteria for simple processes and for process aggregates provides an identity criterion for processes in general.\nIf the PDH is valid, then we have provided a general criterion for the identity of processes through A1, A2, and A3. This hypothesis seems to make sense on at least a variety of examples. For example, a dinner might be analyzed as a process aggregate composed by the motion of various utensils, food and body parts; some quality changes of the food; etc. To take another example, an apple rotting is arguably a process aggregate composed by processes of the color of the apple changing, its chemical composition changing, etc." 
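Continuing the hypothetical sketch above, the aggregate criterion A3 amounts to ordinary set equality over the simple process parts; the identifiers below (for the apple-rotting example) are illustrative only.

```python
# Per A3, a process aggregate is identified by the set of its simple process
# parts. Representing each simple process by an identifying record (here a
# plain (sdc_id, temporal_region_id) tuple) makes the criterion set equality.
color_change = ("color_of_apple0", "t1-t2")
chem_change  = ("chemical_composition_of_apple0", "t1-t2")

apple_rotting_a = frozenset({color_change, chem_change})
apple_rotting_b = frozenset({chem_change, color_change})
assert apple_rotting_a == apple_rotting_b  # same simple parts, same aggregate
```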
}, { "figure_ref": [], "heading": "Discussion of the PDH", "publication_ref": [ "b7", "b19" ], "table_ref": [], "text": "There are at least three kinds of processes (the two formers being identified by GBG [8]) that need to be consider to evaluate the cogency of the PDH: substantial change, e.g. a statue being created or destroyed; mereological change, e.g. a human gaining a tumor or losing a finger; and generically dependent change, e.g. a change of a document.\nFor each of those apparent changes, there is a variety of possible positions. One could endorse an eliminativist position, claiming that e.g. substantial change does not in fact exist. Alternatively, one could endorse a strong reductionist position, claiming that e.g. substantial change exists, but is in fact identical to some simple process or process aggregate. One could also endorse a weak reductionist position, claiming that e.g. substantial changes exist, and are based on simple processes that are \"prior\" or \"more basic\" [20], without being identical to such simple processes or their aggregates. Finally, one could endorse a non-reductionist position, claiming that e.g. substantial changes exist and are not based on more basic simple processes: they are all on equal ontological footing.\nAn eliminativist or strong reductionist position about all those three kinds of processes would not falsify the PDH; however, a weak reductionist or non-reductionist position of any of them would falsify it. Let us thus examine in turn each of those three changes." }, { "figure_ref": [], "heading": "Substantial change", "publication_ref": [ "b20" ], "table_ref": [], "text": "Substantial change is classically analyzed as the coming into existence or the ceasing to exist of a substancethat is, in BFO, of a material entity. However, it is not clear that BFO should accept such changes. Indeed, BFO aims at being non-multiplicativist, as it states that no two material entities can occupy the same spatial region [3]. To use a canonical example, if an amount of clay has the shape of a statue, BFO does not distinguish two entities, the amount of clay and the statue, but rather considers that a unique substance, the amount of clay, plays the role of being a statue [21]. To take another example, a sand castle should not be distinguished from the collection (or mereological sum) of sand grains that constitute it. But then, if this sand castle is washed away by the sea, it might not imply in BFO that a substance (the sand castle) disappears. Rather, one might consider that the collection (or sum) of sand grains that was shaped in a castle-shape (and that instantiated the Sand castle class) is now scattered (and thus does not instantiate Sand castle anymore). Many other examples (a body rotting, a house being destroyed, etc.) might be similarly analyzed. Such a framework might imply an eliminativist view towards substantial change at the macroscopical scale: what exists is not a creation or destruction of substance 8 , but rather a change in instantiating various classes by a material entity. 
This is not the only possible view though: BFO might also accept that an entity appears and disappears when the sand castle is created or destructed, although this may lead to some form of nonmultiplicativism (since the sand castle and the aggregate of sand grains arguably do not have the same identity conditions and are thus distinct entities).\nIf the latter analysis is correct, then there are indeed processes of creation or destruction of material entities, which are arguably not reducible to simple processes, and thus the PDH is false. If the former analysis is correct, then the question for the validity of the PDH becomes whether a change in material entity instantiation can be reducible to a simple process or to a sum of simple processes. One might have an abundant view of SDCs that accepts SDCs such as \"being a Sand Castle\" (maybe as a sum of several more basic SDCs such as \"being made of sand\" and \"having a castle-shape\"). In this case, ceasing to instantiate Sand Castle would amount to the disappearance of this SDC, and such cases of substantial changes would be reducible to SDC change. In case BFO would reject such SDCs, though, we would need to add to the list of simple processes the change of instantiation by a material entity for the PDH to remain valid." }, { "figure_ref": [], "heading": "Mereological change", "publication_ref": [], "table_ref": [], "text": "Let us now turn to mereological change: A process in which an independent continuant gains a part of loses a part. Consider e.g. the following processes:\n• ptpg: John gains a tumor in the pineal gland • pli: John loses his left index finger With a sufficiently general conception of quality, we can account for such changes as simple changes. Suppose indeed that we accept the existence of the following qualities:\n• qtpg: John's quality of having a tumor in the pineal gland • qli: John's quality of having a full index finger Then:\n• ptpg is a simple change of qtpg (namely, its coming to existence)\n• pli is a simple change of qli (namely, its ceasing to exist) Therefore, the (strong or weak) reduction of mereological change to SDC change depends on whether BFO's understanding of SDCs is broad enough to accommodate SDCs such as qtpg and qli (consider also \"being one-legged\" or \"having a mole on one's cheek\")." }, { "figure_ref": [], "heading": "GDC change", "publication_ref": [ "b21", "b22", "b23", "b1" ], "table_ref": [], "text": "Finally, a generically dependent continuant (GDC) can arguably change. A GDC that is \"dependent on one or other independent continuants and can migrate from one bearer to another\" ([2], p. 179). An important example of GDCs are Information Content Entities (ICEs) [22], such as documents. It is an open question whether ICEs can change, but that seems possible: a document can, indeed, be filled or evolveconsider e.g. this article that evolved through time until its final state (see Barton et al.'s [23] discussion on some difficulties related to the diachronic identity of ICEs). Another kind of GDC might be social GDCs. Although those are not fully conceptualized in BFO (see Brochhausen et al.'s [24] work though), those might be an important kind of GDCs. For example, Arp et al. 
[2] consider that there is a social GDC that we might describe as corresponding to the role of President of the USA; Donald's Trump role of president of the USA (that existed from January 2017 to January 2021) and Joe Biden's role of president of the USA (that exists since January 2021) are two SDCs that might be concretizations of such a social GDC. In case a new law would change the power or responsibilities of the president of the USA, then this social GDC would arguably change.\nAll GDCs need to be concretized, often in SDCs (although some informational entities might be concretized in processes since BFO-ISO [3]). Thus, a GDC change might be seen as a parasitic entity over the change of the SDCs that concretize it, or over the processes that concretize it. However, BFO does not endorse an eliminativist approach of GDC; thus, it seems natural to consider that GDC changes should also not be eliminated. BFO also does not endorse a strong reductionist approach of GDC on their concretization: it does not identify a GDC with its concretization (or the sum of its concretizations). Therefore, it also seems natural to refrain from identifying a GDC change with, e.g. the change of the SDCs that concretize it. On the other hand, a weak reductionist (or even maybe non-reductionist) approach of GDC change would seem natural in BFO." }, { "figure_ref": [], "heading": "Conclusion for the PDH", "publication_ref": [], "table_ref": [], "text": "Let us wrap up. Substantial change might be eliminated (but see Footnote 8) in favor of change of instantiation of a material entity, but it is an open question whether BFO would encompass a strong reductionist view of such latter changes. Mereological changes might be (strongly or weakly) reduced to quality changes. Finally, it does not seem that GDC changes can be eliminated or strongly reduced.\nThus, the PDH as formulated so far would be false. However, it might be saved by the combination of two moves: 1) endorsing a general enough view of SDC according to which change of material entity instantiation and mereological changes would be strongly reduced to SDC changes; and 2) widening the definition of simple processes in order to encompass not only SDC changes and spatial changes, but also GDC changes (and possibly material entity creations and destructions, in case BFO would accept such processes).\nThe second point implies in particular that we should spell out a criterion for the identity of GDC changes. A very straightforward criterion would then be a direct adaptation of the criterion (A1) proposed for SDCs above: Axiom for GDC changes PGDC(p1,gdc1) ∧ PGDC(p2,gdc2) ∧ OTR(p1,t1) ∧ OTR(p2,t2) → [p1=p2 ↔ (gdc1=gdc2 ∧ t1= t2)]" }, { "figure_ref": [], "heading": "Causal approach to the identity of processes 5.1. A dispositional view of processes", "publication_ref": [ "b11", "b24", "b25", "b26" ], "table_ref": [], "text": "Since the compositional criterion of identity of processes proposed above crucially depends on the PDH, it would be nice to have another criterion of identity that would not rely on it. Thus, we next investigate a causal approach to the identity of processes. For this purpose, we will utilize a dispositional view of processes. The basic idea is that processes are entities that are causally brought about, and causation can be analyzed in terms of dispositions. For instance, Röhl & Jansen [12] maintain that: \"dispositions connect the static structure of the world, i.e. the natural kinds of continuants, with the dynamical structure, i.e. 
the types of possible and actual causal processes\" (ibid., p. 3). For that matter, the dispositional theory of causality has been actively developed in philosophical ontology [25] [26].\nOne way to formalize such a dispositional view of processes is to hypothesize that any process is a realization of some disposition of an independent continuant that participates in that process:\nA4 PRO(p) → ∃x,t,d(PC(x,p,t) ∧ REAL(d,p,t) ∧ INH(d,x))\nFor any process p, there exist x, t, and d such that x participates in p at t, d is realized in p at t, and d inheres in x.\nTo illustrate A4 with our driving example in the case of specifically dependent continuant changes, pheat is a realization of the disposition of s1 to get heated. In the case of spatial changes, we could consider prot as a realization of the disposition of s1 to be realized in a process of rotational movement (cf. the view of Newtonian force as a disposition to be realized in a process of accelerated motion of the force bearer [27])." }, { "figure_ref": [], "heading": "A dispositional criterion for the identity of processes", "publication_ref": [ "b27", "b28" ], "table_ref": [], "text": "Let us begin by considering two criteria that do not involve dispositions. One of the simplest criteria for the identity of processes is the identity of their participant(s) because a process depends on some independent continuant as a participant: C1 Processes are identical iff they have the same participant(s) at any time. Formally: PRO(p1) ∧ PRO(p2) → [p1= p2 ↔ ∀x,t (PC(x,p1,t) ↔ PC(x,p2,t))]\nAnother criterion is the identity of the spatiotemporal regions of processes and it is traditionally popular in the philosophy of processes and events (as championed by Quine [28] and the late Davidson [29]): C2 Two processes are identical iff they occupy the same spatiotemporal region. Formally: PRO(p1) ∧ PRO(p2) → [p1= p2 ↔ ∀str (OSTR(p1,str) ↔ OSTR(p2,str))] But neither C1 nor C2 succeeds in identifying processes that we can intuitively differentiate. Using the driving example, we would otherwise have the consequence that pheat and prot are both the same process by C1 (because they have the same participant, namely s1) and by C2 (because they occur in the same place at the same time). This consequence may be undesirable in formal ontology as we may need to distinguish these processes when representing them in information systems.\nLet us now turn to the criteria for the identity of processes that involve dispositions and their realizations. A straightforward dispositional criterion would be that two processes are identical iff they realize the same disposition(s) at the same time. We can formalize this statement as follows:\nA5 PRO(p1) ∧ PRO(p2) → [p1= p2 ↔ ∀d,t (REAL(d,p1,t) ↔ REAL(d,p2,t))]\nTwo processes p1 and p2 are identical iff: for any disposition d and any temporal region t, d is realized in p1 at t1 iff d is realized in p2 at t.\nAccording to A5, for instance, pheat and prot are both different processes because, as we have seen above, they are (albeit simultaneous) realizations of different dispositions of s1: the disposition to get heated and the disposition to rotate, respectively." 
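To illustrate how the dispositional criterion A5 separates cases that C1 and C2 conflate, a minimal sketch (again with illustrative, non-BFO names) can represent a process record by the set of (disposition, temporal region) pairs it realizes.

```python
# Per A5, two records denote the same process iff they realize the same
# dispositions at the same times. The identifiers are hypothetical.
p_heat = frozenset({("disposition_of_s1_to_get_heated", "t1-t2")})
p_rot  = frozenset({("disposition_of_s1_to_rotate", "t1-t2")})

# Same participant (s1) and same spatiotemporal region, yet different realized
# dispositions, so A5 distinguishes the two processes where the participant-
# based criterion (C1) and the spatiotemporal criterion (C2) cannot.
assert p_heat != p_rot
```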
}, { "figure_ref": [], "heading": "Illustration: Clarifying BFO:History from a dispositional perspective", "publication_ref": [ "b29" ], "table_ref": [], "text": "To illustrate the dispositional view of processes, we will analyze the subtype of Process called \"History\" in BFO, which is especially important, as this category enables us to define an injection from material entities (and sites) to processes. A BFO:History is: \"A BFO: process that is the sum of the totality of processes taking place in the spatiotemporal region occupied by a material entity or site\" ([2], p. 179). To be concrete, let us consider John's history from a dispositional viewpoint. \"For example, the history of John is the sum of all processes that have occurred within John throughout the course of his entire life, at all granularities\" (ibid., p. 123).\nA naïve attempt to analyze John's history dispositionally would be to claim that it is the sum of all realizations of dispositions that inhere in any (proper or improper) part of John during his whole life. But this attempt fails because there exist some dispositions of John that are realized in processes that are not part of his history. Indeed, suppose that John is moving a pen at time tmove. This process pmove is a realization of John's disposition dJohn to move something. The spatiotemporal region strmove occupied by pmove spatially projects onto the mereological sum of the spatial region occupied by John and the spatial region occupied by John's pen. Then pmove is not part of John's history because John's history occupies only the spatiotemporal region occupied by John. Therefore, dJohn is a disposition of John that is realized in a process (namely pmove) that is not part of his history.\nOn closer examination, however, dJohn is also presumably realized in a pen moving-related process that is part of John's history. To see this, we will introduce Loebe's [30] notion of processual role in his theory of roles. A processual role is part of a process such that it represents the way a single participant behaves in that process. To borrow his example, when John moves his pen, he participates in the process of John moving his pen -which has as participant not only John but also his pen -and he also participates in the associated processual role that has as participant John but not his pen. 9Let us now go back to the example of John's history. Recall that John's disposition dJohn to move something is realized in the process pmove of John moving his pen. From the perspective of processual roles, we can think of the process p'move of John moving simpliciter which is part of pmove and which has as participant John but not his pen. Assume the PARTHOOD model of dispositions (introduced in Section 2.2). Since dJohn is realized in pmove, dJohn is also realized in p'move because, given the PARTHOOD model, a part of a realization of a disposition is also a realization of this disposition. Then, p'move is part of John's history, as it occupies the spatiotemporal region occupied by John. Hence, dJohn is realized in a process (namely p'move) that is part of John's history.\nIn summary, it is not the case that the history of an independent continuant is the sum of all realizations of dispositions that inhere in any (proper or improper) part of the independent continuant during its whole life, as is shown by dJohn and pmove in our example of John's moving his pen. We may hypothesize however that there is a subset of realizations (e.g. 
p'move) of dispositions that inhere in parts of the independent continuant during its existence whose sum is the history of the independent continuant." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b12" ], "table_ref": [], "text": "The causal approach can specify the identity of processes more directly than the compositional approach. However, it is committed to the potentially controversial these that any process is a realization of some disposition. To see the difficulty of this thesis, consider temporally discontinuous processes such as \"my today eating process\" in which I had breakfast in the morning, lunch in the afternoon, and dinner in the evening. We might hypothesize that such a discontinuous process is a realization of a single disposition to eat. Similarly, the mereological sum of the parts of a concert before and after the intermission might be analyzed as a realization of the disposition of the orchestra to play. Or consider a conference running over several days (namely, what happens during the conference itself, excluding the breaks to eat, sleep, etc.): this might be seen as a realization of the disposition of the agents participating in the conference to give talks, raise questions, provide responses, etc. A mereological theory of dispositions [13] would be useful to characterize such complex dispositions." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "We will discuss the alleged problem of too many processes (Section 6.1), a possible reduction of spatial changes to specifically dependent continuant changes (Section 6.2), and a possible reduction of indirect qualitative changes to direct qualitative changes (Section 6.3)." }, { "figure_ref": [], "heading": "Too many processes?", "publication_ref": [], "table_ref": [], "text": "One might worry that this view would lead to a too large number of processes. Indeed, for every process that extends over a temporally interval, there exists a different sub-process on each sub-interval of time. In case every time interval has an infinity of sub-intervals, this implies that an infinity of such sub-processes exists. Consider for example a process of John walking during time interval i1. Then there exists a process of John walking during the first half of i1, of him walking during the second fifth of i1, of him walking during the 12 th sixteenth of i1, etc. However, this does not lead for us to any problematic form of multiplicativism, as those sub-processes are parts of the larger processin the same way that a material entity may be composed of many (maybe an infinity of) material entities." }, { "figure_ref": [], "heading": "A possible reduction of spatial changes to specifically dependent continuant changes", "publication_ref": [ "b30" ], "table_ref": [], "text": "We distinguished two kinds of simple processes: specifically dependent continuant changes and spatial changes. We could think however that Spatial change is a subtype of Specifically dependent continuant change on an auxiliary assumption. According to Barton et Ethier's [31] ontological analysis of the term \"velocity\", an object-velocity is a disposition of the moving object to move. 
The ontology of the object-velocity could enable a spatial change of an independent continuant to be interpreted as a process that is a change of its object-velocity, on the condition that we would add (as GBG do) the notion of \"stative change\" when the specifically dependent continuant of an independent continuant does not change (to account for the case of a uniform motion process, where the object-velocity of the moving entity does not change)." }, { "figure_ref": [], "heading": "A possible reduction of indirect qualitative changes to direct qualitative changes", "publication_ref": [ "b32" ], "table_ref": [], "text": "Let us now explain how we can deal with indirect change in the sense defined by GBG as merely SDC change. Consider apple0, which is green at t1. That is, the skin of apple0 (which we will call skin0) is green. This means that there is a quality color_s0 that inheres in skin0 and that instantiates the universal Green at t1. However, in such situations, we often speak more simply of \"the color of apple0\". This could be understood as implying the existence of a quality color_a0 that would inhere in apple0.\nHere too, this quality instantiates the universal Green at t1. Then, color_a0 and color_s0 are strongly related: in a sense, they reflect the same portion of reality (assuming for simplicity that the skin of an apple cannot be removed from the apple), and they always instantiate the same determinate universal of the determinable Color. This means that when the apple becomes red at t2, both color_a0 and color_s0 instantiate the universal Red. However, as the former inheres in apple0 and the latter in skin0, they cannot be identical. Therefore, if we accept that both the apple and its skin have a color, and that a quality inheres in only one bearer, we seem to be committed to the following informal \"Principle of Quality Expansion\" (on the model of Lombard's [32] Principle of Event Expansion, analyzed by GBG) or \"PQE\": Principle of Quality Expansion (PQE) If an independent continuant x has as part y, then: for any quality q of y, there is a quality q' of x such that q and q' correspond to the same portion of reality. (In particular, q changes whenever q' changes.)\nWe could elucidate the term \"correspond to the same portion of reality\" by means of truthmakers [33]: something in virtue of which a proposition is true (where the term \"proposition\" can be intuitively understood, its ontological nature being left aside).\nIf we accept the PQE, we can make sense of both direct and indirect qualitative change (in the sense of GBG) as simple SDC changes in BFO: what they would analyze as the direct qualitative change <skin0, color_s0, t1> would correspond to our SDC change of color_s0, whereas what they would analyze as the indirect qualitative change <apple0, color_s0, t1> would correspond to our SDC change of color_a0.\nNote that this way to represent indirect changes is optional in our proposal: one might refuse to duplicate color_s0 into color_a0 and only accept that the apple's skin, not the apple, has a color. In that case, one might speak of direct qualitative change and indirect qualitative change as GBG do, and refrain from accepting the entity color_a0. However, by duplicating the quality of the color of the apple, we manage to reduce all qualitative changes to the same kind of SDC change. There is thus a tradeoff between the number of introduced entities (e.g. in the apple scenario, two color qualities corresponding to the same reality vs.
one) and the number of endorsed kinds of changes (only one kind of SDC change vs. both direct and indirect SDC changes). This view has another advantage insofar as it arguably accounts better for existing practices, as ontologies often consider qualities such as the color of an apple, even if it is more fundamentally a part of the apple (its skin) that is responsible for its color." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b33", "b27", "b34", "b17", "b35", "b7", "b36", "b37", "b38", "b39", "b40", "b41", "b43" ], "table_ref": [], "text": "There is a huge body of philosophical literature on processes (or events, which may be a term more frequently used in philosophy). Although its comprehensive survey (e.g. [34]) is outside the purview of this article, there are two prominent views of them often called the \"coarse-grained view\" and the \"fine-grained view\". One typical version of the coarse-grained view says that processes are identical iff they occupy the same spatiotemporal region [28][29], which we formalized as C1 and critically examined in Section 5.2. The fine-grained view, by contrast, characterizes the identity of processes in terms of properties in their broad sense (whether universals or particulars), as is illustrated by Kim's [35] view of processes as property exemplifications. By centering around an ontology of specifically dependent continuants such as dispositions, both the compositional and causal approaches to the identity of processes that we proposed naturally belong to the group of the fine-grained view. It is worth remarking that the early Davidson [18] proposes a causal criterion for the identity of processes (\"Events are identical iff they have the same causes and effects\") and that we may have proposed a dispositional version of such a causal criterion.\nIn formal ontology, different upper ontologies develop different ontologies of processes and events (see e.g. Rodrigues & Abel's [36] general review). Guarino et al. (\"GBG\") [8] provide arguably one of the most systematic and general ontological analyses of events; indeed, we leveraged key elements of their work in developing a compositional approach to the identity of processes in Section 4. An alternative view of processes and events, considerably different from BFO's, is that processes are mutable temporally extended entities and thus do change themselves, while events are immutable temporally extended entities and thus do not change [37] (cf. [38] from a philosophical perspective). Events in this twofold ontology of occurrents would correspond to processes in BFO, while processes therein have no current equivalent in BFO. There are also many other views of the distinction between processes and events. To take just a few examples: processes are continuants rather than occurrents such as events [39][40]; processes are patterns of occurrence, whose concrete realizations may be viewed as events or states [41]; and processes are physical entities, whereas events are mental and social entities [42][43][44]." }, { "figure_ref": [], "heading": "Conclusion and future work", "publication_ref": [ "b13", "b4", "b40" ], "table_ref": [], "text": "We investigated the identity of processes with a focus on the BFO category of process. The resulting two approaches are the compositional approach that is based on two simple kinds of processes (specifically dependent continuant changes and spatial changes) and the causal approach that is based on a dispositional view of processes.
In the future we will further develop each of these two approaches. As for the compositional approach, we will scrutinize the PDH based on the conclusion about it given in Section 4.5.4. As for the causal approach, it is worth investigating the relationship between the identity of processes and the identity of dispositions [14]. An important question will be whether or not the compositional and causal criteria lead to the same results concerning the identity of processes. Our long-term goal is to integrate both approaches so as to develop a systematic theory of the identity of processes, in the hope that the resulting theory will help to clarify various process-related entities such as process profiles [5], whose introduction is motivated by the need to explain the same aspect of different processes, and states (for initial thoughts, see Galton's [41] discussion that the term \"state\" may refer to two different entities: a continuant entity and an occurrent entity)." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We benefited from interesting discussions on related topics: with Nicola Guarino and Riccardo Baratella on their theory of events and qualities, and with Alan Ruttenberg on substance creation and destruction in BFO. Fumiaki Toyoshima is financially supported by the Japan Society for the Promotion of Science (JSPS)." } ]
This paper aims to explore processes and their identity with a focus on the upper ontology Basic Formal Ontology (BFO). We begin with a classification based on two basic classes of changes of independent continuants: changes with respect to a single specifically dependent continuant thereof or with respect to the spatial region that its parts occupy. We accordingly distinguish two kinds of simple processes: specifically dependent continuant changes and spatial changes. Next, we investigate a compositional approach to the identity of processes: the identity of any process is determined by the identity of the simple processes that compose them. Then, we consider a causal approach to the identity of processes with recourse to a dispositional view of processes according to which any process is a realization of some disposition. We also examine assumptions on which these two approaches to the identity of processes are based.
Two Approaches to the Identity of Processes in BFO
[ { "figure_caption": "Figure 1 :1Figure 1: Taxonomy of BFO categories and their associated unary predicates (categories added by us are underlined)", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "List of relational predicates, their domains, ranges and semantic reading, and their functionality.", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "INH(x,y)x (SDC) inheres in y (IC)FunctionalOSTR(x,y)x (PRO) spatiotemporally occupies y (STR)FunctionalOTR(x,y)x (PRO) temporally occupies y (TR)FunctionalP(x,y)x is part of y/PC(x,y,t)x (IC) 3 participates in y (PRO) at t (TR)/PCSP(x,y)x (IC) participates in simple process y (SP)/ 4PSDC(x,y)x (SDCC) is a change of y (SDC)FunctionalREAL(x,y,t)x (REA) is realized in y (PRO) at t (TR)/SUM(y,x1, …, xn)y is a mereological sum of x1, …, xnFunctional on y", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
Fumiaki Toyoshima; Adrien Barton
[ { "authors": "M Ashburner; C A Ball; J A Blake; D Botstein; H Butler; J M Cherry; A P Davis; K Dolinski; S S Dwight; J T Eppig; M A Harris; D P Hill; L Issel-Tarver; A Kasarskis; S Lewis; J C Matese; J E Richardson; M Ringwald; G M Rubin; G Sherlock", "journal": "Nat Genet", "ref_id": "b0", "title": "Gene ontology: tool for the unification of biology. The Gene Ontology Consortium", "year": "2000-05" }, { "authors": "R Arp; B Smith; A D Spear", "journal": "MIT Press", "ref_id": "b1", "title": "Building ontologies with Basic Formal Ontology", "year": "2015" }, { "authors": "", "journal": "", "ref_id": "b2", "title": ".internet. Information technology -Top-level ontologies (TLO) -Part 2: Basic Formal Ontology (BFO)", "year": "2020-03" }, { "authors": "J N Otte; J Beverley; A Ruttenberg", "journal": "Appl Ontol", "ref_id": "b3", "title": "BFO: Basic Formal Ontology", "year": "2022-01" }, { "authors": "B Smith", "journal": "Ratio", "ref_id": "b4", "title": "Classifying processes: an essay in applied ontology", "year": "2012" }, { "authors": "M Jarrar; W Ceusters", "journal": "", "ref_id": "b5", "title": "Classifying processes and Basic Formal Ontology", "year": "2017" }, { "authors": "P Garbacz", "journal": "Synthese", "ref_id": "b6", "title": "A new perspective on criteria of identity", "year": "2022" }, { "authors": "N Guarino; R Baratella; G Guizzardi", "journal": "Appl Ontol", "ref_id": "b7", "title": "Events, their names, and their synchronic structure", "year": "2022-05" }, { "authors": "B Smith; W Ceusters", "journal": "Appl Ontol", "ref_id": "b8", "title": "Ontological realism: a methodology for coordinated evolution of scientific ontologies", "year": "2010-11" }, { "authors": "F Toyoshima; A Barton; L Jansen; J F Ethier", "journal": "IOS Press", "ref_id": "b9", "title": "Towards a unified dispositional framework for realizable entities", "year": "2018" }, { "authors": "A Galton", "journal": "IOS Press", "ref_id": "b10", "title": "The treatment of time in upper ontologies", "year": "2018" }, { "authors": "J Röhl; L Jansen", "journal": "J Biomed Semant", "ref_id": "b11", "title": "Representing dispositions", "year": "2011-08" }, { "authors": "A Barton; L Jansen; J F Ethier", "journal": "", "ref_id": "b12", "title": "A taxonomy of disposition-parthood", "year": "2017" }, { "authors": "A Barton; O Grenier; L Jansen; J F Ethier", "journal": "IOS Press", "ref_id": "b13", "title": "The identity of dispositions", "year": "2018" }, { "authors": "A C Varzi; A J Cotnoir", "journal": "Oxford University Press", "ref_id": "b14", "title": "Mereology", "year": "2021" }, { "authors": "C Masolo; S Borgo; A Gangemi; N Guarino; A Oltramari", "journal": "", "ref_id": "b15", "title": "Wonderweb deliverable D18 -ontology library (final)", "year": "2003" }, { "authors": "S Borgo; R Ferrario; A Gangemi; N Guarino; C Masolo; D Porello; E M Sanfilippo; L Vieu", "journal": "Appl Ontol", "ref_id": "b16", "title": "DOLCE: A Descriptive Ontology for Linguistic and Cognitive Engineering", "year": "2022-01" }, { "authors": "D Davidson", "journal": "", "ref_id": "b17", "title": "The individuation of events", "year": "1969" }, { "authors": "J Wilson", "journal": "", "ref_id": "b18", "title": "Determinables and determinates", "year": "2023" }, { "authors": "R Van Riel; Van Gulick; R ", "journal": "", "ref_id": "b19", "title": "Scientific reduction", "year": "2019" }, { "authors": "", "journal": "", "ref_id": "b20", "title": "Tutorial on Basic Formal Ontology", "year": "2019-02-09" }, { "authors": "W Ceusters; B Smith", 
"journal": "", "ref_id": "b21", "title": "Aboutness: towards foundations for the Information Artifact Ontology", "year": "2015" }, { "authors": "A Barton; F Toyoshima; L Vieu; P Fabry; J F Ethier", "journal": "IOS Press", "ref_id": "b22", "title": "The mereological structure of informational entities", "year": "2020" }, { "authors": "M Brochhausen; M B Almeida; L Slaughter", "journal": "Walter de Gruyter", "ref_id": "b23", "title": "Towards a formal representation of document acts and the resulting legal entities", "year": "2013" }, { "authors": "R L Mumford S Anjum", "journal": "Oxford University Press", "ref_id": "b24", "title": "Getting causes from powers", "year": "2011" }, { "authors": "N Williams", "journal": "Oxford University Press", "ref_id": "b25", "title": "The powers metaphysic", "year": "2019" }, { "authors": "A Barton; R Rovetto; R Mizoguchi", "journal": "IOS Press", "ref_id": "b26", "title": "Newtonian forces and causation: a dispositional account", "year": "2014" }, { "authors": "Wvo Quine", "journal": "Basil Blackwell", "ref_id": "b27", "title": "Events and reification", "year": "1985" }, { "authors": "D Davidson", "journal": "Basil Blackwell", "ref_id": "b28", "title": "Reply to Quine on events", "year": "1985" }, { "authors": "F Loebe", "journal": "Appl Ontol", "ref_id": "b29", "title": "Abstract vs. social roles -Towards a general theoretical account of roles", "year": "2007" }, { "authors": "A Barton; J F Ethier", "journal": "IOS Press", "ref_id": "b30", "title": "The two ontological faces of velocity", "year": "2016" }, { "authors": "L B Lombard", "journal": "Routledge", "ref_id": "b31", "title": "Events: a metaphysical study", "year": "1986" }, { "authors": "F Macbride; Truthmakers", "journal": "", "ref_id": "b32", "title": "The Stanford Encyclopedia of Philosophy", "year": "2022" }, { "authors": "R Casati; A Varzi", "journal": "", "ref_id": "b33", "title": "Events", "year": "2020" }, { "authors": "J Kim", "journal": "Reidel", "ref_id": "b34", "title": "Events as property exemplifications", "year": "1976" }, { "authors": "F H Rodrigues; M Abel", "journal": "Appl Ontol", "ref_id": "b35", "title": "What to consider about events: A survey on the ontology of occurrents", "year": "2019" }, { "authors": "A Galton; R Mizoguchi", "journal": "Appl Ontol", "ref_id": "b36", "title": "The water falls but the waterfall does not fall: new perspectives on objects, processes and events", "year": "2009" }, { "authors": "R Stout", "journal": "Oxford University Press", "ref_id": "b37", "title": "Process, action, and experience", "year": "2018" }, { "authors": "R Stout", "journal": "Processes. Philosophy", "ref_id": "b38", "title": "", "year": "1997" }, { "authors": "A Galton", "journal": "IOS Press", "ref_id": "b39", "title": "On what goes on: the ontology of processes and events", "year": "2006" }, { "authors": "A Galton", "journal": "", "ref_id": "b40", "title": "Processes as patterns of occurrence", "year": "" }, { "authors": "K Gill", "journal": "Can. J. Philos", "ref_id": "b41", "title": "On the metaphysical distinction between processes and events", "year": "1993-09" }, { "authors": "G Kassel", "journal": "IOS Press", "ref_id": "b42", "title": "Processes endure, whereas events occur", "year": "2019" }, { "authors": "G Kassel", "journal": "Appl Ontol", "ref_id": "b43", "title": "Physical processes, their life and their history", "year": "2020-05" } ]
[ { "formula_coordinates": [ 5, 108.02, 327.1, 210.35, 11.04 ], "formula_id": "formula_0", "formula_text": "D1 SDCC(p) =def. PRO(p) ∧ ∃sdc PSDC(p,sdc)" }, { "formula_coordinates": [ 6, 105.02, 194.83, 293.22, 36.54 ], "formula_id": "formula_1", "formula_text": "A1 PSDC(p1,sdc1) ∧ PSDC(p2,sdc2) ∧ OTR(p1,t1) ∧ OTR(p2,t2) → [p1=p2 ↔ (sdc1=sdc2 ∧ t1= t2)] If p1 is a change of sdc1," }, { "formula_coordinates": [ 6, 105.02, 366.94, 260.35, 36.54 ], "formula_id": "formula_2", "formula_text": "A2 SC(p1) ∧ SC(p2) ∧ OTR(p1,t1) ∧ OTR(p2,t2) → [p1=p2 ↔ [∀x(PCSP(x,p1) ↔ PCSP(x, p2)) ∧ t1= t2)]] If p1 is a spatial change," }, { "formula_coordinates": [ 6, 105.02, 589.92, 418.19, 24.87 ], "formula_id": "formula_3", "formula_text": "D2 PROA(p) =def. PRO(p) ∧ ∃n,sp1,…,spn (n≥2 ∧ ∧1≦i≦n SP(spi) ∧ SUM(p, sp1,…,spn) ∧ sp1≠sp2)" }, { "formula_coordinates": [ 7, 105.02, 137.11, 303.91, 25.17 ], "formula_id": "formula_4", "formula_text": "A3 [PROA(p1) ∧ PROA(p2) ∧ p1=p2] ↔ ∃n,sp1,…,spn, (n≥2 ∧ ∧1≦i≦n SP(spi) ∧ SUM(p1, sp1,…,spn) ∧ SUM(p2, sp1,…,spn) )" }, { "formula_coordinates": [ 9, 105.02, 677.67, 261.99, 11.04 ], "formula_id": "formula_5", "formula_text": "A4 PRO(p) → ∃x,t,d(PC(x,p,t) ∧ REAL(d,p,t) ∧ INH(d,x))" }, { "formula_coordinates": [ 10, 108.02, 423.12, 338.02, 11.04 ], "formula_id": "formula_6", "formula_text": "A5 PRO(p1) ∧ PRO(p2) → [p1= p2 ↔ ∀d,t (REAL(d,p1,t) ↔ REAL(d,p2,t))]" } ]
2024-03-06
[ { "figure_ref": [], "heading": "Abstract", "publication_ref": [], "table_ref": [], "text": "Zero-shot 6D object pose estimation involves the detection of novel objects with their 6D poses in cluttered scenes, presenting significant challenges for model generalizability. Fortunately, the recent Segment Anything Model (SAM) has showcased remarkable zero-shot transfer performance, which provides a promising solution to tackle this task. Motivated by this, we introduce SAM-6D, a novel framework designed to realize the task through two steps, including instance segmentation and pose estimation. Given the target objects, SAM-6D employs two dedicated sub-networks, namely Instance Segmentation Model (ISM) and Pose Estimation Model (PEM), to perform these steps on cluttered RGB-D images. ISM takes SAM as an advanced starting point to generate all possible object proposals and selectively preserves valid ones through meticulously crafted object matching scores in terms of semantics, appearance and geometry. By treating pose estimation as a partial-to-partial point matching problem, PEM performs a two-stage point matching process featuring a novel design of background tokens to construct dense 3D-3D correspondence," }, { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b18", "b4", "b4", "b11" ], "table_ref": [], "text": "Object pose estimation is fundamental in many real-world applications, such as robotic manipulation and augmented reality. Its evolution has been significantly influenced by the emergence of deep learning models. The most studied task in this field is Instance-level 6D Pose Estimation [18, 19, 51, 58, 60, 63], which demands annotated training images of the target objects, thereby making the deep models object-specific. Recently, the research emphasis has gradually shifted towards the task of Category-level 6D Pose Estimation [7, 29-32, 56, 61] for handling unseen objects, yet only provided that they belong to certain categories of interest. In this paper, we thus delve into a broader task setting of Zero-shot 6D Object Pose Estimation [5,28], which aspires to detect all instances of novel objects, unseen during training, and estimate their 6D poses. Despite its significance, this zero-shot setting presents considerable challenges in both object detection and pose estimation.\nRecently, Segment Anything Model (SAM) [26] has garnered attention due to its remarkable zero-shot segmentation performance, which enables prompt segmentation with a variety of prompts, e.g., points, boxes, texts or masks. By prompting SAM with evenly sampled 2D grid points, one can generate potential class-agnostic object proposals, which may be highly beneficial for zero-shot 6D object pose estimation. To this end, we propose a novel framework, named SAM-6D, which employs SAM as an advanced starting point for the focused zero-shot task. Fig. 2 gives an overview illustration of SAM-6D.
Specifically, SAM-6D employs an Instance Segmentation Model (ISM) to realize instance segmentation of novel objects by enhancing SAM with a carefully crafted object matching score, and a Pose Estimation Model (PEM) to solve object poses through a two-stage process of partial-to-partial point matching.\nThe Instance Segmentation Model (ISM) is developed using SAM to take advantage of its zero-shot abilities for generating all possible class-agnostic proposals, and then assigns a meticulously calculated object matching score to each proposal for ascertaining whether it aligns with a given novel object. In contrast to methods that solely focus on object semantics [5,40], we design the object matching scores considering three terms, including semantics, appearance and geometry. For each proposal, the first term assesses its semantic matching degree to the rendered templates of the object, while the second one further evaluates its appearance similarities to the best-matched template. The final term considers the matching degree based on ge- ometry, such as object shape and size, by calculating the Intersection-over-Union (IoU) value between the bounding boxes of the proposal and the 2D projection of the object transformed by a rough pose estimate. The Pose Estimation Model (PEM) is designed to calculate a 6D object pose for each identified proposal that matches the novel object. Initially, we formulate this pose estimation challenge as a partial-to-partial point matching problem between the sampled point sets of the proposal and the target object, considering the factors such as occlusions, segmentation inaccuracies, and sensor noises. To solve this problem, we propose a simple yet effective solution that involves the use of background tokens; specifically, for the two point sets, we learn to align their non-overlapped points with the background tokens in the feature space, and thus effectively establish an assignment matrix to build the necessary correspondence for predicting the object pose. Based on the design of background tokens, we further develop PEM with two point matching stages, i.e., Coarse Point Matching and Fine Point Matching. The first stage realizes sparse correspondence to derive an initial object pose, which is subsequently used to transform the point set of the proposal, enabling the learning of positional encodings. The second stage incorporates the positional encodings of the two point sets to inject the initial correspondence, and builds dense correspondence for estimating a more precise object pose. To effectively model dense interactions in the second stage, we propose an innovative design of Sparse-to-Dense Point Transformers, which realize interactions on the sparse versions of the dense features, and subsequently, distribute the enhanced sparse features back to the dense ones using Linear Transformers [12,24]." }, { "figure_ref": [ "fig_0" ], "heading": "SAM-6D", "publication_ref": [ "b8", "b5", "b3" ], "table_ref": [], "text": "For the two models of SAM-6D, ISM, built on SAM, does not require any network re-training or fine-tuning, while PEM is trained on the large-scale synthetic images of ShapeNet-Objects [4] and Google-Scanned-Objects [9] datasets provided by [28]. We evaluate SAM-6D on the seven core datasets of the BOP benchmark [54], including LM-O, T-LESS, TUD-L, IC-BIN, ITODD, HB, and YCB-V. The qualitative results are visualized in Fig. 1. 
SAM-6D outperforms the existing methods on both tasks of instance segmentation and pose estimation of novel objects, thereby showcasing its robust generalization capabilities.\nOur main contributions could be summarized as follows: • We propose a novel framework of SAM-6D, which realizes joint instance segmentation and pose estimation of novel objects from RGB-D images, and outperforms the existing methods on seven datasets of BOP benchmark. Recent studies have also investigated semantically segmenting anything due to the critical role of semantics in vision tasks. Semantic Segment Anything (SSA) [6] is proposed on top of SAM, aiming to assign semantic categories to the masks generated by SAM. Both PerSAM [72] and Matcher [34] employ SAM to segment the object belonging to a specific category in a query image by searching for point prompts with the aid of a reference image containing an object of the same category. CNOS [40] is proposed to segment all instances of a given object model, which firstly generates mask proposals via SAM and subsequently filters out proposals with low feature similarities against object templates rendered from the object model.\nFor efficiency, FastSAM [74] is proposed by utilizing instance segmentation networks with regular convolutional networks instead of visual transformers used in SAM. Additionally, MobileSAM [68] replaces the heavy encoder of SAM with a lightweight one through decoupled distillation." }, { "figure_ref": [], "heading": "Pose Estimation of Novel Objects", "publication_ref": [ "b0", "b2", "b40", "b41", "b45", "b2", "b0", "b40", "b4", "b9", "b16", "b19", "b16", "b4", "b40", "b47" ], "table_ref": [], "text": "Methods Based on Image Matching Methods within this group [1,28,33,38,39,41,42,46,50] often involve comparing object proposals to templates of the given novel objects, which are rendered with a series of object poses, to retrieve the best-matched object poses. For example, Gen6D [33], OVE6D [1], and GigaPose [41] are designed to select the viewpoint rotations via image matching and then estimate the in-plane rotations to obtain the final estimates. MegaPose [28] employs a coarse estimator to treat image matching as a classification problem, of which the recognized object poses are further updated by a refiner. Methods Based on Feature Matching Methods within this group [5,10,11,17,20,53] align the 2D pixels or 3D points of the proposals with the object surface in the feature space [21,52], thereby building correspondence to compute object poses. OnePose [53] matches the pixel descriptors of proposals with the aggregated point descriptors of the point sets constructed by Structure from Motion (SfM) for 2D-3D correspondence, while OnePose++ [17] further improves it with a keypoint-free SfM and a sparse-to-dense 2D-3D matching model. ZeroPose [5] realizes 3D-3D matching via geometric structures, and GigaPose [41] establishes 2D-2D correspondence to regress in-plane rotation and 2D scale. Moreover, [11] introduces a zero-shot category-level 6D pose estimation task, along with a self-supervised semantic correspondence learning method. 
Unlike the above one-stage point matching work, the unique contributions in our Pose Estimation Model are: (a) a two-stage pipeline that boosts performance by incorporating coarse correspondence for finer matching, (b) an efficient design of background tokens to eliminate the need of optimal transport with iterative optimization [48], and (c) a Sparse-to-Dense Point Transformer to effectively model dense relationship." }, { "figure_ref": [ "fig_1" ], "heading": "Methodology of SAM-6D", "publication_ref": [], "table_ref": [], "text": "We present SAM-6D for zero-shot 6D object pose estimation, which aims to detect all instances of a specific novel object, unseen during training, along with their 6D object poses in the RGB-D images. To realize the challenging task, SAM-6D breaks it down into two steps via two dedicated sub-networks, i.e., an Instance Segmentation Model (ISM) and a Pose Estimation Model (PEM), to first segment all instances and then individually predict their 6D poses, as shown in Fig. 2. We detail the architectures of ISM and PEM in Sec. 3.1 and Sec. 3.2, respectively." }, { "figure_ref": [], "heading": "Instance Segmentation Model", "publication_ref": [], "table_ref": [], "text": "SAM-6D uses an Instance Segmentation Model (ISM) to segment the instances of a novel object O. Given a cluttered scene, represented by an RGB image I, ISM leverages the zero-shot transfer capabilities of Segment Anything Model (SAM) [26] to generate all possible proposals M. For each proposal m ∈ M, ISM calculates an object matching score s m to assess the matching degree between m and O in terms of semantics, appearance, and geometry. The matched instances with O can then be identified by simply setting a matching threshold δ m .\nIn this subsection, we initially provide a brief review of SAM in Sec. 3.1.1 and then explain the computation of the object matching score s m in Sec. 3.1.2." }, { "figure_ref": [], "heading": "Preliminaries of Segment Anything Model", "publication_ref": [], "table_ref": [], "text": "Given an RGB image I, Segment Anything Model (SAM) [26] realizes promptable segmentation with various types of prompts P r , e.g., points, boxes, texts, or masks. Specifically, SAM consists of three modules, including an image encoder Φ Image , a prompt encoder Φ Prompt , and a mask decoder Ψ Mask , which could be formulated as follows:\nM, C = Ψ Mask (Φ Image (I), Φ Prompt (P r )),(1)\nwhere M and C denote the predicted proposals and the corresponding confidence scores, respectively.\nTo realize zero-shot transfer, one can prompt SAM with evenly sampled 2D grids to yield all possible proposals, which can then be filtered based on confidence scores, retaining only those with higher scores, and applied to Non-Maximum Suppression to eliminate redundant detections." }, { "figure_ref": [], "heading": "Object Matching Score", "publication_ref": [ "b44" ], "table_ref": [], "text": "Given the proposals M, the next step is to identify the ones that are matched with a specified object O by assigning each proposal m ∈ M with an object matching score s m , which comprises three terms, each evaluating the matches in terms of semantics, appearance, and geometry, respectively.\nFollowing [40], we sample N T object poses in SE(3) space to render the templates {T k } N T k=1 of O, which are fed into a pre-trained visual transformer (ViT) backbone [8] of DINOv2 [45], resulting in the class embedding\nf cls T k and N patch T k patch embeddings {f patch T k ,i } N patch T k i=1\nof each template T k . 
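As a minimal illustration of how the class embeddings just described can be compared, and anticipating the semantic matching score defined in the following paragraphs, the sketch below computes a top-K averaged cosine similarity between a proposal embedding and the per-template embeddings. It assumes the embeddings are precomputed tensors (e.g., ViT class tokens); the function name and the value of top_k are illustrative choices rather than values taken from the paper.

```python
import torch
import torch.nn.functional as F

def semantic_matching(f_cls_proposal, f_cls_templates, top_k=5):
    """Top-K averaged cosine similarity between one proposal class embedding
    and the class embeddings of all rendered templates.

    f_cls_proposal: (C,) tensor, e.g. the class token of the masked image crop.
    f_cls_templates: (N_T, C) tensor, one class token per template view.
    Returns the semantic score and the index of the best-matched template.
    """
    sims = F.cosine_similarity(f_cls_templates, f_cls_proposal.unsqueeze(0), dim=-1)  # (N_T,)
    k = min(top_k, sims.numel())
    s_sem = sims.topk(k).values.mean()
    t_best = int(sims.argmax())
    return s_sem, t_best
```

The returned index plays the role of the best-matched template T best, which the appearance and geometric terms defined next rely on.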
For each proposal m, we crop the detected region out from I, and resize it to a fixed resolution. The image crop is denoted as I m and also processed through the same ViT to obtain the class embedding f cls Im and the patch embeddings {f patch Im,j }, j = 1, ..., N patch Im , with N patch Im denoting the number of patches within the object mask. Subsequently, we calculate the values of the individual score terms. Semantic Matching Score We compute a semantic score s sem through the class embeddings by averaging the top K values from { <f cls Im , f cls T k > / (|f cls Im | · |f cls T k |) }, k = 1, ..., N T , to establish a robust measure of semantic matching, with <, > denoting an inner product. The template that yields the highest semantic value can be seen as the best-matched template, denoted as T best , and is used in the computation of the subsequent two scores. Appearance Matching Score Given T best , we compare I m and T best in terms of appearance using an appearance score s appe , based on the patch embeddings, as follows:\ns_{appe} = \frac{1}{N^{patch}_{I_m}} \sum_{j=1}^{N^{patch}_{I_m}} \max_{i=1,\ldots,N^{patch}_{T_{best}}} \frac{\langle f^{patch}_{I_m,j}, f^{patch}_{T_{best},i} \rangle}{|f^{patch}_{I_m,j}| \cdot |f^{patch}_{T_{best},i}|} . (2)\ns appe is utilized to distinguish objects that are semantically similar but differ in appearance. Geometric Matching Score In terms of geometry, we score the proposal m by considering factors like object shapes and sizes. Utilizing the object rotation from T best and the mean location of the cropped points of m, we have a coarse pose to transform the object O, which is then projected onto the image to obtain a compact bounding box B o . Afterwards, the Intersection-over-Union (IoU) value between B o and the bounding box B m of m is used as the geometric score s geo :\ns_{geo} = \frac{|B_m \cap B_o|}{|B_m \cup B_o|} . (3)\nThe reliability of s geo is easily impacted by occlusions. We thus compute a visible ratio r vis to evaluate the confidence of s geo , which is detailed in the supplementary materials. By combining the above three score terms, the object matching score s m could be formulated as follows:\ns_m = \frac{s_{sem} + s_{appe} + r_{vis} \cdot s_{geo}}{1 + 1 + r_{vis}} . (4)" }, { "figure_ref": [], "heading": "Pose Estimation Model", "publication_ref": [], "table_ref": [], "text": "Given the point set of the proposal m, denoted as P m ∈ R Nm×3 with N m points, and that of the target object O, denoted as P o ∈ R No×3 with N o points, the goal is to solve an assignment matrix presenting the partial-to-partial correspondence between P m and P o . We propose to equip their respective point features F m ∈ R Nm×C and F o ∈ R No×C with learnable background tokens, denoted as f bg m ∈ R C and f bg o ∈ R C, where C is the number of feature channels. This simple design resolves the assignment problem of non-overlapped points in two point sets, and the partial-to-partial correspondence thus could be effectively built based on feature similarities. Specifically, we can first compute the attention matrix A as follows:\nA = [f^{bg}_m, F_m] \times [f^{bg}_o, F_o]^T \in \mathbb{R}^{(N_m+1) \times (N_o+1)} , (5)\nand then obtain the soft assignment matrix \tilde{A}:\n\tilde{A} = \mathrm{Softmax}_{row}(A/\tau) \cdot \mathrm{Softmax}_{col}(A/\tau) , (6)\nwhere Softmax row (·) and Softmax col (·) denote Softmax operations executed along the row and column of the matrix, respectively, and τ is a constant temperature." }, { "figure_ref": [], "heading": "Feature Extraction", "publication_ref": [], "table_ref": [], "text": "The Feature Extraction module is employed to extract point-wise features F m and F o for the point sets P m and P o of the proposal m and the given object O, respectively.\nRather than directly extracting features from the discretized points P m , we utilize the visual transformer (ViT) backbone [8] on its masked image crop I m to capture patch-wise embeddings, which are then reshaped and interpolated to match the size of I m . Each point in P m is assigned the corresponding pixel embedding, yielding F m .\nWe represent the object O with templates rendered from different camera views. All visible object pixels (points) are aggregated across views and sampled to create the point set P o .
Corresponding pixel embeddings, extracted using the ViT backbone, are then used to form F o ." }, { "figure_ref": [], "heading": "Coarse Point Matching", "publication_ref": [ "b47", "b5", "b24" ], "table_ref": [], "text": "The Coarse Point Matching module is used to initialize a coarse object pose R init and t init by estimating a soft assignment matrix Ãc between sparse versions of P m and P o .\nAs shown in Fig. 3, we first sample a sparse point set P c m ∈ R N c m ×3 with N c m points from P m , and P c o ∈ R N c o ×3 with N c o points from P o , along with their respective sampled features F c m and F c o . Then we concatenate F c m and F c o with learnable background tokens, and process them through T c stacked Geometric Transformers [48], each of which consists of a geometric self-attention for intra-point-set feature learning and a cross-attention for inter-point-set correspondence modeling. The processed features, denoted as F c m and F c o , are subsequently used to compute the soft assignment matrix Ãc based on (5) and (6).\nWith Ãc , we obtain the matching probabilities between the overlapped points of P c m and P c o , which can serve as the distribution to sample multiple triplets of point pairs and compute pose hypotheses [14,25]. We assign each pose hypothesis R hyp and t hyp a pose matching score s hyp as:\ns_{hyp} = N^c_m \Big/ \sum_{p^c_m \in P^c_m} \min_{p^c_o \in P^c_o} \big\| R^T_{hyp} (p^c_o - t_{hyp}) - p^c_m \big\|_2 . (7)\nAmong the pose hypotheses, the one with the highest pose matching score is chosen as the initial pose R init and t init input to the next Fine Point Matching module." }, { "figure_ref": [], "heading": "Fine Point Matching", "publication_ref": [ "b5", "b47", "b56", "b11", "b47", "b11" ], "table_ref": [], "text": "The Fine Point Matching module is utilized to build dense correspondence and estimate a more precise pose R and t.\nTo build finer correspondence, we sample a dense point set P f m ∈ R N f m ×3 with N f m points from P m , and P f o ∈ R N f o ×3 with N f o points from P o , along with their respective sampled features F f m and F f o . We then inject the initial correspondence, learned by the coarse point matching, through the inclusion of positional encodings. Specifically, we transform P f m with the coarse pose R init and t init and apply it to a multi-scale Set Abstract Level [47] to learn the positional encodings F p m ; similarly, positional encodings F p o are also learned for P f o . We then add F p m and F p o to F f m and F f o , concatenate each with a background token, and process them to yield F f m and F f o , resulting in the soft assignment matrix Ãf based on (5) and (6).\nHowever, the commonly used transformers [48,57] incur a significant computational cost when learning dense point features. The recent Linear Transformers [12,24], while being more efficient, are less effective at modeling point interactions, since they implement attention along the feature dimension. To address this, we propose a novel design of Sparse-to-Dense Point Transformer (SDPT), as shown in Fig. 3. Specifically, given two dense point features F f m and F f o , SDPT first samples two sparse features from them and applies a Geometric Transformer [48] to enhance their interactions, resulting in two improved sparse features, denoted as F f ′ m and F f ′ o . SDPT then employs a Linear Cross-attention [12] to spread the information from F f ′ m to F f m , treating the former as the key and value of the transformer, and the latter as the query.
The same operations are applied to F f ′ o and F f o to update F f o . In Fine Point Matching, we stack T f SDPTs to model the dense correspondence and learn the soft assignment matrix Ãf . We note that, in each SDPT, the background tokens are consistently maintained in both sparse and dense point features. After obtaining Ãf , we search within P f o for the corresponding points to all foreground points in P f m along with the probabilities, building dense correspondence, and compute the final object pose R and t via weighted SVD." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b8", "b43", "b42", "b15", "b26" ], "table_ref": [], "text": "In this section, we conduct experiments to evaluate our proposed SAM-6D, which consists of an Instance Segmentation Model (ISM) and a Pose Estimation Model (PEM). Datasets We evaluate our proposed SAM-6D on the seven core datasets of the BOP benchmark [54], including LM-O, T-LESS, TUD-L, IC-BIN, ITODD, HB, and YCB-V. PEM is trained on the large-scale synthetic ShapeNet-Objects [4] and Google-Scanned-Objects [9] datasets provided by [28], with a total of 2, 000, 000 images across ∼ 50, 000 objects. Implementation Details For ISM, we follow [40] to utilize the default ViT-H SAM [26] or FastSAM [74] for proposal generation, and the default ViT-L model of DINOv2 [44] to extract class and patch embeddings. For PEM, we set\nN c m = N c o = 196 and N f m = N f o = 2048\n, and use In-foNCE loss [43] to supervise the learning of attention matrices (5) for both matching stages. We use ADAM to train PEM with a total of 600,000 iterations; the learning rate is initialized as 0.0001, with a cosine annealing schedule used, and the batch size is set as 28. For each object, we use two rendered templates for training PEM. During evaluation, we follow [40] and use 42 templates for both ISM and PEM. Evaluation Metrics For instance segmentation, we report the mean Average Precision (mAP) scores at different Intersection-over-Union (IoU) thresholds ranging from 0.50 to 0.95 with a step size of 0.05. For pose estimation, we report the mean Average Recall (AR) w.r.t three error functions, i.e., Visible Surface Discrepancy (VSD), Maximum Symmetry-Aware Surface Distance (MSSD) and Maximum Symmetry-Aware Projection Distance (MSPD). For further details about these evaluation metrics, please refer to [ . We report the mean Average Recall (AR) among VSD, MSSD and MSPD, as introduced in Sec. 4. The symbol ' †' denotes the use of pose refinement proposed in [28].\nThe symbol ' * ' denotes the results published on BOP leaderboard. Our used masks of MaskRCNN [16] are provided by CosyPose [27]." }, { "figure_ref": [ "fig_0" ], "heading": "Instance Segmentation of Novel Objects", "publication_ref": [ "b4" ], "table_ref": [ "tab_2" ], "text": "We compare our ISM of SAM-6D with ZeroPose [5] 1. Qualitative results of ISM are visualized in Fig. 1." }, { "figure_ref": [], "heading": "Pose Estimation of Novel Objects", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Comparisons with Existing Methods", "publication_ref": [ "b4", "b40" ], "table_ref": [ "tab_8" ], "text": "We compare our PEM of SAM-6D with the representative methods, including MegaPose [28], ZeroPose [5], and Gi-gaPose [41], for pose estimation of novel objects. Quantitative comparisons, as presented in Table 5, show that our PEM, without the time-intensive render-based refiner [28], outperforms the existing methods under various mask pre-dictions. 
Importantly, the mask predictions from our ISM significantly enhance the performance of PEM, compared to other mask predictions, further validating the advantages of ISM. Qualitative results of PEM are visualized in Fig. 1." }, { "figure_ref": [], "heading": "Ablation Studies and Analyses", "publication_ref": [ "b47" ], "table_ref": [ "tab_6" ], "text": "We conduct ablation studies on the YCB-V dataset to evaluate the efficacy of individual designs in PEM, with the mask predictions generated by ISM based on SAM.\nEfficacy of Background Tokens We address the partialto-partial point matching issue through a simple yet effective design of background tokens. Another existing solution is the use of optimal transport [48] with iterative optimization, which, however, is time-consuming. The two solutions are compared in Table 3, which shows that our PEM with background tokens achieves results comparable to optimal transport, but with a faster inference speed. As the density of points for matching increases, optimal transport requires more time to derive the assignment matrices." }, { "figure_ref": [], "heading": "Efficacy of Two Point Matching Stages", "publication_ref": [ "b47" ], "table_ref": [ "tab_7", "tab_8" ], "text": "With the background tokens, we design PEM with two stages of point matching via a Coarse Point Matching module and a Fine Point Matching module. Firstly, we validate the effectiveness of the Fine Point Matching module, which effectively improves the results of the coarse module, as verified in Table 4. Further, we evaluate the effectiveness of the Coarse Point Matching module by removing it from PEM. In this case, the point sets of object proposals are not transformed and are directly used to learn the positional encodings in the fine module. The results, presented in Table 4, indicate that the removal of Coarse Point Matching significantly degrades the performance, which may be attributed to the large distance between the sampled point sets of the proposals and target objects, as no initial poses are provided.\nEfficacy of Sparse-to-Dense Point Transformers We design Sparse-to-Dense Point Transformers (SDPT) in the Fine Point Matching module to manage dense point interactions. Within each SDPT, Geometric Transformers [48] is employed to learn the relationships between sparse point sets, which are then spread to the dense ones via Linear Transformers [24]. We conduct experiments on either Geometric Transformers using sparse point sets with 196 points or Linear Transformers using dense point sets with 2048 points. The results, presented in Table 5, indicate inferior performance compared to using our SDPTs. This is because Geometric Transformers struggle to handle dense point sets due to high computational costs, whereas Linear Transformers prove to be ineffective in modeling dense correspondence with attention along the feature dimension." }, { "figure_ref": [], "heading": "Runtime Analysis", "publication_ref": [], "table_ref": [], "text": "We conduct evaluation on a server with a GeForce RTX 3090 GPU, and report in on the seven core datasets of BOP benchmark, indicating the efficiency of SAM-6D which avoids the use of timeintensive render-based refiners. We note that SAM-based method takes more time on pose estimation than FastSAMbased one, due to more object proposals generated by SAM." 
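To make the background-token mechanism examined in the ablation above concrete, the following sketch mirrors the dual-softmax assignment of Eqs. (5) and (6) from Sec. 3.2. It is a minimal illustration rather than the released implementation: tensor shapes, the temperature value and the function name are illustrative assumptions.

```python
import torch

def background_token_assignment(F_m, F_o, f_bg_m, f_bg_o, tau=0.05):
    """Dual-softmax soft assignment with background tokens (cf. Eqs. (5)-(6)).

    F_m: (N_m, C) proposal point features; F_o: (N_o, C) object point features.
    f_bg_m, f_bg_o: (C,) learnable background tokens; tau: temperature (illustrative value).
    """
    Fm = torch.cat([f_bg_m.unsqueeze(0), F_m], dim=0)   # (N_m+1, C)
    Fo = torch.cat([f_bg_o.unsqueeze(0), F_o], dim=0)   # (N_o+1, C)
    A = Fm @ Fo.t()                                     # attention matrix, (N_m+1, N_o+1)
    A_tilde = torch.softmax(A / tau, dim=1) * torch.softmax(A / tau, dim=0)
    # For each proposal point (rows 1..N_m), column 0 stands for "no valid correspondence".
    idx = A_tilde[1:].argmax(dim=1)                     # (N_m,)
    conf = A_tilde[1:].gather(1, idx.unsqueeze(1)).squeeze(1)
    valid = idx > 0                                     # rows matched to the background are discarded
    return A_tilde, idx - 1, conf, valid                # idx - 1 indexes into F_o for valid rows
```

Rows whose maximum falls in the background column are simply treated as unmatched, which is the property that removes the need for iterative optimal transport in this comparison.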
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we take Segment Anything Model (SAM) as an advanced starting point for zero-shot 6D object pose estimation, and present a novel framework, named SAM-6D, which comprises an Instance Segmentation Model (ISM) and a Pose Estimation Model (PEM) to accomplish the task in two steps. ISM utilizes SAM to segment all potential object proposals and assigns each of them an object matching score in terms of semantics, appearance, and geometry. PEM then predicts the object pose for each proposal by solving a partial-to-partial point matching problem through two stages of Coarse Point Matching and Fine Point Matching. The effectiveness of SAM-6D is validated on the seven core datasets of BOP benchmark, where SAM-6D significantly outperforms existing methods." }, { "figure_ref": [], "heading": "SAM-6D: Segment Anything Model Meets Zero-Shot 6D Object Pose Estimation", "publication_ref": [], "table_ref": [], "text": "and bilinearly interpolated to match the input resolution of 224×224 with 256 feature channels. Further specifics about the network can be found in Fig. 7. For a cropped observed RGB image, the pixel features within the mask are ultimately chosen to correspond to the point set transformed from the masked depth image. For object templates, the pixels within the masks across views are finally aggregated, with the surface point of per pixel known from the renderer. Both point sets of the proposal and the target object are normalized to fit a unit sphere by dividing by the object scale, effectively addressing the variations in object scales.\nWe use two views of object templates for training, and 42 views for evaluation as CNOS [40], which is the standard setting for the results reported in this paper." }, { "figure_ref": [], "heading": "B.1.2 Coarse Point Matching", "publication_ref": [ "b47", "b47" ], "table_ref": [], "text": "In the Coarse Point Matching module, we utilize T c Geometric Transformers [48] to model the relationships between the sparse point set \nP c m ∈ R N c m ×3 of\nP c = M c m • ( Ãc [1 :, 1 :]) γ • M cT o ,(10)\nwhere γ is used to sharpen the probabilities and set as 1.5.\nThe probabilities of points that have no correspondence, whether in P c m or P c o , are all set to 0. Following this, the probabilities P c are normalized to ensure their sum equals 1, and act as weights used to randomly select 6,000 triplets of point pairs from the total pool of N c m × N c o pairs. Each triplet, which consists of three point pairs, is utilized to calculate a pose using SVD, along with a distance between the point pairs based on the computed pose. Through this procedure, a total of 6,000 pose hypotheses are generated, and to minimize computational cost, only the 300 poses with the smallest point pair distances are selected. Finally, the initial pose for the Fine Point Matching module is determined from these 300 poses, with the pose that has the highest pose matching score being selected.\nIn the Coarse Point Matching module, we set T c = 3 and N c m = N c o = 196, with all the feature channels designated as 256. The configurations of the Geometric Transformers adhere to those used in [48]." 
}, { "figure_ref": [ "fig_6", "fig_7" ], "heading": "B.1.3 Fine Point Matching", "publication_ref": [ "b47", "b11", "b47", "b11" ], "table_ref": [], "text": "In the Fine Point Matching module, we utilize T f Sparseto-Dense Point Transformers to model the relationships between the dense point set P f m ∈ R N f m ×3 of the observed object proposal m and the set P f o ∈ R N f o ×3 of the target object O. Their respective features F f m and F f o are thus improved to their enhanced versions F f m and F f o . Each of these enhanced feature maps also includes the background token. An additional fully-connected layer is applied to the features both before and after the transformers. We use the upper script 'f ' to indicate variables associated with the Fine Point Matching module, and the lower scripts 'm' and 'o' to distinguish between the proposal and the object.\nDifferent from the coarse module, we condition both fea-tures F f m and F f o before applying them to the transformers by adding their respective positional encodings, which are learned via a multi-scale Set Abstract Level [47] from P f m transformed by the initial pose and P f o without transformation, respectively. The used architecture for positional encoding learning is illustrated in Fig. 8. For more details, one can refer to [47].\nAnother difference from the coarse module is the type of transformers used. To handle dense relationships, we design the Sparse-to-Dense Point Transformers, which utilize Geometric Transformers [48] to process sparse point sets and disseminate information to dense point sets via Linear Cross-attention layers [12,24]. The configurations of the Geometric Transformers adhere to those used in [48]; the point numbers of the sampled sparse point sets are all set as 196. The Linear Cross-attention layer enables attention along the feature dimension, and details of its architecture can be found in Fig. 9; for more details, one can refer to [12,24].\nDuring inference, similar to the coarse module, we compute the soft assignment matrix Ãf ∈ R (N f m +1)×(N f o +1) , and obtain two binary-value matrices\nM f m ∈ R N f m ×1 and M f o ∈ R N f o ×1\n. We then formulate the probabilities P f ∈ R N f m ×N f o as follows:\nP f = M f m • ( Ãf [1 :, 1 :]) • M f T o .(11)\nBased on P f , we search for the best-matched point in P f o for each point in P f m , assigned with the matching probability. The final object pose is then calculated using a weighted SVD, with the matching probabilities of the point pairs serving as the weights.\nBesides, we set T f = 3 and N f m = N f o = 2, 048, with all the feature channels designated as 256. During training, we follow [28] to obtain the initial object poses by augmenting the ground truth ones with random noises." }, { "figure_ref": [], "heading": "B.2. Training Objectives", "publication_ref": [ "b42" ], "table_ref": [], "text": "We use InfoNCE loss [43] to supervise the learning of attention matrices for both coarse and fine modules. Specifically, given two point sets P m ∈ R Nm×3 and P o ∈ R No×3 , along with their enhanced features Fm and Fo , which are \nwhere CE(•, •) denotes the cross-entropy loss function.\nŶm ∈ R Nm and Ŷo ∈ R No denote the ground truths for P m and P o . Given the ground truth pose R and t, each element y m in Ŷm , corresponding to the point p m in P m , could be obtained as follows:\ny m = 0 if d k * ≥ δ dis k * if d k * < δ dis ,(13)\nwhere \nk * =\nwhere for the loss L in Eq. 
( 12), we use the upper scripts 'c' and 'f ' to distinguish between the losses in the coarse and fine point matching modules, respectively, while the lower script 'l' denotes the sequence of the transformer blocks in each module." }, { "figure_ref": [], "heading": "B.3. More Quantitative Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.3.1 Effects of The View Number of Templates", "publication_ref": [], "table_ref": [], "text": "We present a comparison of results using different views of object templates in " }, { "figure_ref": [], "heading": "B.3.2 Comparisons with OVE6D", "publication_ref": [ "b0" ], "table_ref": [ "tab_3" ], "text": "OVE6D [1] is a classical method for zero-shot pose estimation based on image matching, which first constructs a codebook from the object templates for viewpoint rotation retrieval and subsequently regresses the in-plane rotation. When comparing our SAM-6D with OVE6D using their provided segmentation masks (as shown in Table 12), SAM-6D outperforms OVE6D on LM-O dataset, without the need for using Iterative Closest Point (ICP) algorithm for post-optimization." }, { "figure_ref": [], "heading": "Method LM-O", "publication_ref": [ "b0", "b0", "b0", "b0" ], "table_ref": [ "tab_3" ], "text": "OVE6D [1] 56.1 OVE6D with ICP [1] 72.8 SAM-6D (Ours) 74.7\nTable 12. Quantitative results of OVE6D [1] and our SAM-6D on LM-O dataset. The evaluation metric is the standard ADD(-S) for pose estimation. SAM-6D is evaluated with the same masks provided by [1]." }, { "figure_ref": [ "fig_9" ], "heading": "B.4. More Qualitative Comparisons with Existing Methods", "publication_ref": [], "table_ref": [], "text": "To illustrate the advantages of our Pose Estimation Model (ISM), we visualize in Fig. 10 the qualitative comparisons with MegaPose [28] on all the seven core datasets of the BOP benchmark [54] for pose estimation of novel objects.\nFor reference, we also present the corresponding ground truths, barring those for the ITODD and HB datasets, as these are unavailable. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": ", r vis is calculated as the ratio of patches in T best that can find a corresponding patch in I m , estimating the occlusion degree of O in I m . We can formulate the calculation of visible ratio r vis as follows:\nwhere\nThe constant threshold δ vis is empirically set as 0.5 to determine whether the patches in T best are occluded." }, { "figure_ref": [], "heading": "A.2. Template Selection for Object Matching", "publication_ref": [], "table_ref": [], "text": "For each given target object, we follow [40] to first sample 42 well-distributed viewpoints defined by the icosphere primitive of Blender. Corresponding to these viewpoints, we select 42 fully visible object templates from the Physically-based Rendering (PBR) training images of the BOP benchmark [54] by cropping regions and masking backgrounds using the ground truth object bounding boxes and masks, respectively. These cropped and masked images then serve as the templates of the target object, which are used to calculate the object matching scores for all generated proposals. It's noted that these 42 templates can also be directly rendered using the pre-defined viewpoints." }, { "figure_ref": [], "heading": "A.3. 
Hyperparameter Settings", "publication_ref": [ "b43" ], "table_ref": [], "text": "In the paper, we use SAM [26] based on ViT-H or Fast-SAM based on YOLOv8x as the segmentation model, and ViT-L of DINOv2 [44] as the description model. We utilize the publicly available codes for autonomous segmentation from SAM and FastSAM, with the hyperparameter settings displayed in Table 7." }, { "figure_ref": [], "heading": "A.4. More Quantitative Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.4.1 Detection Results", "publication_ref": [ "b4" ], "table_ref": [], "text": "We compare our Instance Segmentation Model (ISM) with ZeroPose [5] and CNOS [40] in terms of 2D object detection in Table 8, where our ISM outperforms both methods owing to the meticulously crafted design of object matching score." }, { "figure_ref": [], "heading": "A.4.2 Effects of Model Sizes", "publication_ref": [], "table_ref": [], "text": "We draw a comparison across different model sizes for both segmentation and description models on YCB-V dataset in Table 9, which indicates a positive correlation between larger model sizes and higher performance for both models. A.5. More Qualitative Results" }, { "figure_ref": [], "heading": "A.5.1 Qualitative Comparisons on Appearance", "publication_ref": [], "table_ref": [], "text": "Matching Score We visualize the qualitative comparisons of the appearance matching score s appe in Fig. 4 to show its advantages in scoring the proposals w.r.t. a given object in terms of appearance." }, { "figure_ref": [], "heading": "A.5.2 Qualitative Comparisons on Geometric Matching Score", "publication_ref": [], "table_ref": [], "text": "We visualize the qualitative comparisons of the geometric matching score s geo in Fig. 5 to show its advantages in scoring the proposals w.r.t. a given object in terms of geometry, e.g., object shapes and sizes. " }, { "figure_ref": [], "heading": "A.5.3 More Qualitative Comparisons with Existing Methods", "publication_ref": [], "table_ref": [], "text": "To illustrate the advantages of our Instance Segmentation Model (ISM), we visualize in Fig. 6 the qualitative comparisons with CNOS [40] on all the seven core datasets of the BOP benchmark [54] for instance segmentation of novel objects. For reference, we also provide the ground truth masks, except for the ITODD and HB datasets, as their ground truths are not available. " }, { "figure_ref": [], "heading": "B. Supplementary Material for Pose Estimation Model", "publication_ref": [], "table_ref": [], "text": "" } ]
ultimately yielding the pose estimates. Without bells and whistles, SAM-6D outperforms the existing methods on the seven core datasets of the BOP Benchmark for both instance segmentation and pose estimation of novel objects.
SAM-6D: Segment Anything Model Meets Zero-Shot 6D Object Pose Estimation
[ { "figure_caption": "Figure 1 .1Figure 1. We present SAM-6D for zero-shot 6D object pose estimation. SAM-6D takes an RGB image (a) and a depth map (b) of a cluttered scene as inputs, and performs instance segmentation (d) and pose estimation (e) for novel objects (c). We present the qualitative results of SAM-6D on the seven core datasets of the BOP benchmark [54], including YCB-V, LM-O, HB, T-LESS, IC-BIN, ITODD and TUD-L, arranged from left to right. Best view in the electronic version.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. An overview of our proposed SAM-6D, which consists of an Instance Segmentation Model (ISM) and a Pose Estimation Model (PEM) for joint instance segmentation and pose estimation of novel objects in RGB-D images. ISM leverages the Segment Anything Model (SAM) [26] to generate all possible proposals and selectively retains valid ones based on object matching scores. PEM involves two stages of point matching, from coarse to fine, to establish 3D-3D correspondence and calculate object poses for all valid proposals. Best view in the electronic version.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3. An illustration of Pose Estimation Model (PEM) of SAM-6D. and that of O as P o ∈ R No×3 with N o points, the goal is to solve an assignment matrix to present the partial-to-partial correspondence between P m and P o . Partial-to-partial correspondence arises as P o only partially matches P m due to occlusions, and P m may partially align with P o due to segmentation inaccuracies and sensor noises. We propose to equip their respective point features F m ∈ R Nm×C and F o ∈ R No×C with learnable Background Tokens, denoted as f bg m ∈ R C and f bg o ∈ R C, where C is the number of feature channels. This simple design resolves the assignment problem of non-overlapped points in two point sets, and the partial-to-partial correspondence thus could be effectively built based on feature similarities. Specifically, we can first compute the attention matrix A as follows:", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Softmax row () and Softmax col () denote Softmax operations executed along the row and column of the matrix, respectively. τ is a constant temperature. The values in each row of Ã, excluding the first row associated with the background, indicate the matching probabilities of the point p m ∈ P m aligning with background and the points in P o . Specifically, for p m , its corresponding point p o ∈ P o can be identified by locating the index of the maximum score ã ∈ à along the row; if this index equals zero, the embedding of p m aligns with the background token, indicating it has no valid correspondence in P o . Once à is obtained, we can gather all the matched pairs {(p m , p o )}, along with their scores {ã}, to compute the pose using weighted SVD. Building on the above strategy with background tokens, PEM is designed in two point matching stages. For the proposal m and the target object O, the first stage involves Coarse Point Matching between their sparse point sets P c m and P c o , while the second stage involves Fine Point Matching between their dense sets P f m and P f o ; we use the upper scripts 'c' and 'f' to indicate respective variables of these two stages. 
The aim of the first stage is to derive a coarse pose R init and t init from sparse correspondence. Then in the second stage, we use the initial pose to transform P f m for learning the positional encodings, and employ stacked Sparse-to-Dense Point Transformers to learn dense correspondence for a final pose R and t. Prior to two point matching modules, we incorporate a Feature Extraction module to learn individual point features of m and O. Fig. 3 gives a detailed illustration of PEM.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "the observed object proposal m and the set P c o ∈ R N c o ×3 of the target object O. Their respective features F c m and F c o are thus improved to their enhanced versions F c m and F c o . Each of these enhanced feature maps also includes the background token. An additional fully-connected layer is applied to the features both before and after the transformers. In this paper, we use the upper script 'c' to indicate variables associated with the Coarse Point Matching module, and the lower scripts 'm' and 'o' to distinguish between the proposal and the object. During inference, we compute the soft assignment matrix Ãc ∈ R (N c m +1)×(N c o +1) , and obtain two binary-value matrices M c m ∈ R N c m ×1 and M c o ∈ R N c o ×1 , denoting whether the points in P c m and P c o correspond to the background, owing to the design of background tokens; '0' indicates correspondence to the background, while '1' indicates otherwise. We then have the probabilities P c ∈ R N c m ×N c o to indicate the matching degree of the N c m × N c o point pairs between P c m and P c o , formulated as follows:", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 .Figure 7 .67Figure 6. Qualitative results on the seven core datasets of the BOP benchmark [54] for instance segmentation of novel objects.", "figure_data": "", "figure_id": "fig_5", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. An illustration of the positional encoding for a point set with N points within the Fine Point Matching Module of the Pose Estimation Model.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Left: The structure of Linear Cross-attention layer. Right: The structure of Linear Cross-attention.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "We employ the objective ( 12 )12upon all the transformer blocks of both coarse and fine point matching modules, and thus optimize the Pose Estimation Model by solving the following problem: min l=1,...,Tc L c l + l=1,...,T f L f l .", "figure_data": "", "figure_id": "fig_8", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. Qualitative results on the seven core datasets of the BOP benchmark [54] for pose estimation of novel objects.", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "54]. Instance segmentation results of different methods on the seven core datasets of the BOP benchmark[54]. 
We report the mean Average Precision (mAP) scores at different Intersection-over-Union (IoU) values ranging from 0.50 to 0.95 with a step size of 0.05.", "figure_data": "MethodSegmentation Object Matching Score Model s sem s appe s geoBOP Dataset LM-O T-LESS TUD-L IC-BIN ITODDHBYCB-VMeanZeroPose [5]SAM [26]---34.432.741.425.122.447.851.936.5CNOS [40]FastSAM [74]---39.737.448.027.025.451.159.941.2CNOS [40]SAM [26]---39.639.739.128.428.248.059.540.4✓××39.537.648.725.725.351.260.241.2FastSAM [74]✓ ✓✓ ×× ✓40.6 40.439.3 41.450.1 49.727.7 28.229.0 30.152.2 54.060.6 61.142.8 43.6SAM-6D (Ours)✓ ✓✓ ×✓ ×42.2 43.442.0 39.151.7 48.229.3 33.331.9 28.854.8 55.162.1 60.344.9 44.0SAM [26]✓ ✓✓ ×× ✓44.4 44.040.8 44.749.8 54.834.5 33.830.0 31.555.7 58.359.5 59.945.0 46.7✓✓✓46.045.156.935.733.259.360.548.1MethodInput Type Detection / SegmentationBOP Dataset LM-O T-LESS TUD-L IC-BIN ITODDHBYCB-VMeanWith Supervised Detection / SegmentationMegaPose [28]RGB18.719.720.515.38.0018.613.916.2MegaPose † [28]RGB53.762.258.443.630.172.960.454.5MegaPose † [28] ZeroPose [5]RGB-D RGB-DMaskRCNN [16]58.3 26.154.3 24.371.2 61.137.1 24.740.4 26.475.7 38.263.3 29.557.2 32.6ZeroPose † [5]RGB-D56.253.387.241.843.668.258.458.4SAM-6D (Ours)RGB-D66.566.080.961.931.981.879.666.9With Zero-Shot Detection / SegmentationZeroPose [5]RGB-D26.017.841.217.738.043.925.725.7ZeroPose † [5]RGB-DZeroPose [5]49.134.074.539.042.961.057.751.2SAM-6D (Ours)RGB-D63.543.080.251.848.469.179.262.2MegaPose * [28]RGB22.917.725.815.210.825.128.120.8MegaPose † * [28]RGB49.947.765.336.731.565.460.150.9MegaPose † * [28]RGB-D62.648.785.146.746.873.076.462.8ZeroPose † * [5]RGB-DCNOS (FastSAM) [40]53.840.083.539.252.165.365.357.0GigaPose [41]RGB29.927.330.223.118.834.829.027.6GigaPose † [41]RGB59.957.063.546.739.772.266.357.9SAM-6D (Ours)RGB-D65.147.982.549.756.273.881.565.3SAM-6D (Ours)RGB-DSAM-6D (FastSAM)66.748.582.951.057.273.683.466.2SAM-6D (Ours)RGB-DSAM-6D (SAM)69.951.590.458.860.277.684.570.4", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Pose estimation results of different methods on the seven core datasets of the BOP benchmark[54]", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Further enhancements to our baselines are achieved via the inclusion of appearance and geometry matching scores, i.e., s appe and s geo , as verified in Table", "figure_data": "andCNOS [40], both of which score the object proposals interms of semantics solely, for instance segmentation ofnovel objects. The quantitative results are presented in Ta-ble 1, demonstrating that our ISM, built on the publiclyavailable foundation models of SAM [26] / FastSAM [74]and ViT (pre-trained by DINOv2 [44]), delivers superior re-sults without the need for network re-training or finetuning.", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "the runtime averaged", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Quantitative results of Optimal Transport[48] and our design of Background Tokens in the Pose Estimation Model on YCB-V. 
The reported time is the average per-image processing time of pose estimation across the entire dataset on a server with a GeForce RTX 3090 GPU.", "figure_data": "Coarse Point Matching Fine Point MatchingAR✓×77.6×✓40.2✓✓84.5", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation studies on the the strategy of two point matching stages in the Pose Estimation Model on YCB-V.", "figure_data": "Transformer#PointARGeometric Transformer [48]19681.7Linear Transformer [24]204878.4Sparse-to-Dense Point Transformer 196 → 2048 84.5", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Quantitative comparisons among various types of transformers employed in the Fine Point Matching module of the Pose Estimation Model on YCB-V.", "figure_data": "Segmentation ModelTime (s) Instance Segmentation Pose Estimaiton AllFastSAM [74]0.450.981.43SAM [26]2.801.574.37", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Runtime of SAM-6D with different segmentation models. The reported time is the average per-image processing time across the seven core datasets of BOP benchmark on a server with a GeForce RTX 3090 GPU.", "figure_data": "", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Argmin k=1,...,Nm || R(p m -t) -p o,k || 2 , and d k * = || R(p m -t) -p o, k * || 2 . k * is the index of the closest point p o, k * in P o to p m , while d k * denotes the distance between p m and p o, k * in the object coordinate system. δ dis is a distance threshold determining whether the point p m has the correspondence in P o ; we set δ dis as a constant 0.15, since both P m and P o are normalized to a unit sphere. The elements in Ŷo are also generated in a similar way.", "figure_data": "", "figure_id": "tab_12", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Table 11. As shown in the table, results with only one template perform poorly as a single view cannot fully depict the entire object. With an increase in the number of views, performance improves. For consistency with our Instance Segmentation Model and CNOS [40], we utilize 42 views of templates as the default setting in the main paper. Pose estimation results with different view numbers of object templates on YCB-V. We report the mean Average Recall (AR) among VSD, MSSD and MSPD.", "figure_data": "# View1281642AR21.8 62.7 83.9 84.1 84.5", "figure_id": "tab_13", "figure_label": "11", "figure_type": "table" } ]
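As a concrete reading of the label-generation rule quoted in the caption above (assign each observed point its nearest object point under the ground-truth pose, and discard matches farther than δ_dis = 0.15 on the unit-sphere-normalized point sets), a minimal NumPy sketch might look as follows. The rotation/translation convention simply follows the formula R(p_m − t) as written, and all names are illustrative.

```python
import numpy as np

def correspondence_labels(pts_m, pts_o, R, t, delta_dis=0.15):
    """pts_m: (Nm, 3) observed points; pts_o: (No, 3) object points (both normalized
    to a unit sphere); R, t: ground-truth pose. Returns labels in {0, 1, ..., No},
    where 0 marks points without a valid correspondence (background)."""
    # Transform the observed points into the object coordinate system: R (p_m - t).
    pts_m_obj = (pts_m - t) @ R.T                                          # (Nm, 3)
    dists = np.linalg.norm(pts_m_obj[:, None, :] - pts_o[None], axis=-1)   # (Nm, No)
    k_star = dists.argmin(axis=1)                                          # index of the closest object point
    d_star = dists[np.arange(len(pts_m)), k_star]                          # distance to that point
    # 1-based index of the matched point, or 0 if it is farther than delta_dis.
    return np.where(d_star < delta_dis, k_star + 1, 0)
```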
Jiehong Lin; Lihua Liu; Dekun Lu; Kui Jia
[ { "authors": "Dingding Cai; Janne Heikkilä; Esa Rahtu", "journal": "", "ref_id": "b0", "title": "Ove6d: Object viewpoint encoding for depth-based 6d object pose estimation", "year": "2022" }, { "authors": "Jun Cen; Yizheng Wu; Kewei Wang; Xingyi Li; Jingkang Yang; Yixuan Pei; Lingdong Kong; Ziwei Liu; Qifeng Chen", "journal": "", "ref_id": "b1", "title": "Sad: Segment any rgbd", "year": "2023" }, { "authors": "Jiazhong Cen; Zanwei Zhou; Jiemin Fang; Wei Shen; Lingxi Xie; Xiaopeng Zhang; Qi Tian", "journal": "", "ref_id": "b2", "title": "Segment anything in 3d with nerfs", "year": "2023" }, { "authors": "Thomas Angel X Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Su", "journal": "", "ref_id": "b3", "title": "Shapenet: An information-rich 3d model repository", "year": "2015" }, { "authors": "Jianqiu Chen; Mingshan Sun; Tianpeng Bao; Rui Zhao; Liwei Wu; Zhenyu He", "journal": "", "ref_id": "b4", "title": "3d model-based zero-shot pose estimation pipeline", "year": "2023" }, { "authors": "Jiaqi Chen; Zeyu Yang; Li Zhang", "journal": "", "ref_id": "b5", "title": "Semantic segment anything", "year": "2023" }, { "authors": "Kai Chen; Qi Dou", "journal": "", "ref_id": "b6", "title": "Sgpa: Structure-guided prior adaptation for category-level 6d object pose estimation", "year": "2021" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b7", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Laura Downs; Anthony Francis; Nate Koenig; Brandon Kinman; Ryan Hickman; Krista Reymann; Thomas B Mchugh; Vincent Vanhoucke", "journal": "IEEE", "ref_id": "b8", "title": "Google scanned objects: A highquality dataset of 3d scanned household items", "year": "2022" }, { "authors": "Zhiwen Fan; Panwang Pan; Peihao Wang; Yifan Jiang; Dejia Xu; Hanwen Jiang; Zhangyang Wang", "journal": "", "ref_id": "b9", "title": "Pope: 6-dof promptable pose estimation of any object", "year": "2023" }, { "authors": "Walter Goodwin; Sagar Vaze; Ioannis Havoutis; Ingmar Posner", "journal": "Springer", "ref_id": "b10", "title": "Zero-shot category-level object pose estimation", "year": "2022" }, { "authors": "Dongchen Han; Xuran Pan; Yizeng Han; Shiji Song; Gao Huang", "journal": "", "ref_id": "b11", "title": "Flatten transformer: Vision transformer using focused linear attention", "year": "2023" }, { "authors": "Dongsheng Han; Chaoning Zhang; Yu Qiao; Maryam Qamar; Yuna Jung; Seungkyu Lee; Sung-Ho Bae; Choong Seon; Hong ", "journal": "", "ref_id": "b12", "title": "Segment anything model (sam) meets glass: Mirror and transparent objects cannot be easily detected", "year": "2023" }, { "authors": "Laurvig Rasmus; Anders Haugaard; Buch Glent", "journal": "", "ref_id": "b13", "title": "Surfemb: Dense and continuous correspondence distributions for object pose estimation with learnt surface embeddings", "year": "2022" }, { "authors": "Haibin He; Jing Zhang; Mengyang Xu; Juhua Liu; Bo Du; Dacheng Tao", "journal": "", "ref_id": "b14", "title": "Scalable mask annotation for video text spotting", "year": "" }, { "authors": "Kaiming He; Georgia Gkioxari; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b15", "title": "Mask r-cnn", "year": "2017" }, { "authors": "Xingyi He; Jiaming Sun; Yuang Wang; Di Huang; Hujun Bao; Xiaowei 
Zhou", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b16", "title": "Onepose++: Keypoint-free oneshot object pose estimation without cad models", "year": "2022" }, { "authors": "Yisheng He; Wei Sun; Haibin Huang; Jianran Liu; Haoqiang Fan; Jian Sun", "journal": "", "ref_id": "b17", "title": "Pvn3d: A deep point-wise 3d keypoints voting network for 6dof pose estimation", "year": "2020" }, { "authors": "Yisheng He; Haibin Huang; Haoqiang Fan; Qifeng Chen; Jian Sun", "journal": "", "ref_id": "b18", "title": "Ffb6d: A full flow bidirectional fusion network for 6d pose estimation", "year": "2021" }, { "authors": "Yisheng He; Yao Wang; Haoqiang Fan; Jian Sun; Qifeng Chen", "journal": "", "ref_id": "b19", "title": "Fs6d: Few-shot 6d pose estimation of novel objects", "year": "2022" }, { "authors": "Shengyu Huang; Zan Gojcic; Mikhail Usvyatsov; Andreas Wieser; Konrad Schindler", "journal": "", "ref_id": "b20", "title": "Predator: Registration of 3d point clouds with low overlap", "year": "2021" }, { "authors": "Ge-Peng Ji; Deng-Ping Fan; Peng Xu; Ming-Ming Cheng; Bowen Zhou; Luc Van Gool", "journal": "", "ref_id": "b21", "title": "Sam struggles in concealed scenes-empirical study on\" segment anything", "year": "2023" }, { "authors": "Wei Ji; Jingjing Li; Qi Bi; Wenbo Li; Li Cheng", "journal": "", "ref_id": "b22", "title": "Segment anything is not always perfect: An investigation of sam on different real-world applications", "year": "2023" }, { "authors": "Angelos Katharopoulos; Apoorv Vyas; Nikolaos Pappas; Franc ¸ois; Fleuret ", "journal": "PMLR", "ref_id": "b23", "title": "Transformers are rnns: Fast autoregressive transformers with linear attention", "year": "2020" }, { "authors": "Tong Ke; I Stergios; Roumeliotis", "journal": "", "ref_id": "b24", "title": "An efficient algebraic solution to the perspective-three-point problem", "year": "2017" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b25", "title": "Segment anything", "year": "2023" }, { "authors": "Yann Labbé; Justin Carpentier; Aubry Mathieu; Josef Sivic", "journal": "Springer", "ref_id": "b26", "title": "Cosypose: Consistent multi-view multi-object 6d pose estimation", "year": "2020" }, { "authors": "Yann Labbé; Lucas Manuelli; Arsalan Mousavian; Stephen Tyree; Stan Birchfield; Jonathan Tremblay; Justin Carpentier; Mathieu Aubry; Dieter Fox; Josef Sivic", "journal": "", "ref_id": "b27", "title": "Megapose: 6d pose estimation of novel objects via render & compare", "year": "2022" }, { "authors": "Jiehong Lin; Hongyang Li; Ke Chen; Jiangbo Lu; Kui Jia", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "Sparse steerable convolutions: An efficient learning of se (3)-equivariant features for estimation and tracking of object poses in 3d space", "year": "2021" }, { "authors": "Jiehong Lin; Zewei Wei; Zhihao Li; Songcen Xu; Kui Jia; Yuanqing Li", "journal": "", "ref_id": "b29", "title": "Dualposenet: Category-level 6d object pose and size estimation using dual pose network with refined learning of pose consistency", "year": "2021" }, { "authors": "Jiehong Lin; Zewei Wei; Changxing Ding; Kui Jia", "journal": "Springer", "ref_id": "b30", "title": "Category-level 6d object pose and size estimation using selfsupervised deep prior deformation networks", "year": "2022" }, { "authors": "Jiehong Lin; Zewei Wei; Yabin Zhang; Kui Jia", 
"journal": "", "ref_id": "b31", "title": "Vi-net: Boosting category-level 6d object pose estimation via learning decoupled rotations on the spherical representations", "year": "2023" }, { "authors": "Yuan Liu; Yilin Wen; Sida Peng; Cheng Lin; Xiaoxiao Long; Taku Komura; Wenping Wang", "journal": "Springer", "ref_id": "b32", "title": "Gen6d: Generalizable model-free 6-dof object pose estimation from rgb images", "year": "2022" }, { "authors": "Yang Liu; Muzhi Zhu; Hengtao Li; Hao Chen; Xinlong Wang; Chunhua Shen", "journal": "", "ref_id": "b33", "title": "Matcher: Segment anything with one shot using all-purpose feature matching", "year": "2023" }, { "authors": "Zhaoyang Liu; Yinan He; Wenhai Wang; Weiyun Wang; Yi Wang; Shoufa Chen; Qinglong Zhang; Yang Yang; Qingyun Li; Jiashuo Yu", "journal": "", "ref_id": "b34", "title": "Internchat: Solving vision-centric tasks by interacting with chatbots beyond language", "year": "2023" }, { "authors": "Jun Ma; Bo Wang", "journal": "", "ref_id": "b35", "title": "Segment anything in medical images", "year": "2023" }, { "authors": "Haoyu Maciej A Mazurowski; Hanxue Dong; Jichen Gu; Nicholas Yang; Yixin Konz; Zhang", "journal": "Medical Image Analysis", "ref_id": "b36", "title": "Segment anything model for medical image analysis: an experimental study", "year": "2023" }, { "authors": "Yinlin Van Nguyen Nguyen; Yang Hu; Mathieu Xiao; Vincent Salzmann; Lepetit", "journal": "", "ref_id": "b37", "title": "Templates for 3d object pose estimation revisited: Generalization to new objects and robustness to occlusions", "year": "2022" }, { "authors": "Thibault Van Nguyen Nguyen; Yinlin Groueix; Mathieu Hu; Vincent Salzmann; Lepetit", "journal": "", "ref_id": "b38", "title": "Nope: Novel object pose estimation from a single image", "year": "2023" }, { "authors": "Thibault Van Nguyen Nguyen; Georgy Groueix; Vincent Ponimatkin; Tomas Lepetit; Hodan", "journal": "", "ref_id": "b39", "title": "Cnos: A strong baseline for cad-based novel object segmentation", "year": "2023" }, { "authors": "Thibault Van Nguyen Nguyen; Mathieu Groueix; Vincent Salzmann; Lepetit", "journal": "", "ref_id": "b40", "title": "Gigapose: Fast and robust novel object pose estimation via one correspondence", "year": "2024" }, { "authors": "Brian Okorn; Qiao Gu; Martial Hebert; David Held", "journal": "IEEE", "ref_id": "b41", "title": "Zephyr: Zero-shot pose hypothesis rating", "year": "2021" }, { "authors": "Aaron Van Den Oord; Yazhe Li; Oriol Vinyals", "journal": "", "ref_id": "b42", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "Maxime Oquab; Timothée Darcet; Théo Moutakanni; Huy Vo; Marc Szafraniec; Vasil Khalidov; Pierre Fernandez; Daniel Haziza; Francisco Massa; Alaaeldin El-Nouby", "journal": "", "ref_id": "b43", "title": "Dinov2: Learning robust visual features without supervision", "year": "2023" }, { "authors": "Maxime Oquab; Timothée Darcet; Théo Moutakanni; Huy Vo; Marc Szafraniec; Vasil Khalidov; Pierre Fernandez; Daniel Haziza; Francisco Massa; Alaaeldin El-Nouby", "journal": "", "ref_id": "b44", "title": "Dinov2: Learning robust visual features without supervision", "year": "" }, { "authors": "Panwang Pan; Zhiwen Fan; Peihao Brandon Y Feng; Chenxin Wang; Zhangyang Li; Wang", "journal": "", "ref_id": "b45", "title": "Learning to estimate 6dof pose from limited data: A few-shot, generalizable approach using rgb images", "year": "2023" }, { "authors": "Charles Ruizhongtai; Qi ; Li Yi; Hao Su; Leonidas J Guibas", "journal": 
"Advances in neural information processing systems", "ref_id": "b46", "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Zheng Qin; Hao Yu; Changjian Wang; Yulan Guo; Yuxing Peng; Kai Xu", "journal": "", "ref_id": "b47", "title": "Geometric transformer for fast and robust point cloud registration", "year": "2022" }, { "authors": "Qiuhong Shen; Xingyi Yang; Xinchao Wang", "journal": "", "ref_id": "b48", "title": "Anything-3d: Towards single-view anything reconstruction in the wild", "year": "2023" }, { "authors": "Ivan Shugurov; Fu Li; Benjamin Busam; Slobodan Ilic", "journal": "", "ref_id": "b49", "title": "Osop: A multi-stage one shot object pose estimation framework", "year": "2022" }, { "authors": "Yongzhi Su; Mahdi Saleh; Torben Fetzer; Jason Rambach; Nassir Navab; Benjamin Busam; Didier Stricker; Federico Tombari", "journal": "", "ref_id": "b50", "title": "Zebrapose: Coarse to fine surface encoding for 6dof object pose estimation", "year": "2022" }, { "authors": "Jiaming Sun; Zehong Shen; Yuang Wang; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b51", "title": "Loftr: Detector-free local feature matching with transformers", "year": "2021" }, { "authors": "Jiaming Sun; Zihao Wang; Siyu Zhang; Xingyi He; Hongcheng Zhao; Guofeng Zhang; Xiaowei Zhou", "journal": "", "ref_id": "b52", "title": "Onepose: One-shot object pose estimation without cad models", "year": "2022" }, { "authors": "Martin Sundermeyer; Tomáš Hodaň; Yann Labbe; Gu Wang; Eric Brachmann; Bertram Drost; Carsten Rother; Jiří Matas", "journal": "", "ref_id": "b53", "title": "Bop challenge 2022 on detection, segmentation and pose estimation of specific rigid objects", "year": "2023" }, { "authors": "Lv Tang; Haoke Xiao; Bo Li", "journal": "", "ref_id": "b54", "title": "Can sam segment anything? 
when sam meets camouflaged object detection", "year": "2023" }, { "authors": "Meng Tian; Marcelo H Ang; Hee Gim; Lee", "journal": "Springer", "ref_id": "b55", "title": "Shape prior deformation for categorical 6d object pose and size estimation", "year": "2020" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b56", "title": "Attention is all you need", "year": "2017" }, { "authors": "Chen Wang; Danfei Xu; Yuke Zhu; Roberto Martín-Martín; Cewu Lu; Li Fei-Fei; Silvio Savarese", "journal": "", "ref_id": "b57", "title": "Densefusion: 6d object pose estimation by iterative dense fusion", "year": "2019" }, { "authors": "Dongqing Wang; Tong Zhang; Alaa Abboud; Sabine Süsstrunk", "journal": "", "ref_id": "b58", "title": "Inpaintnerf360: Text-guided 3d inpainting on unbounded neural radiance fields", "year": "2023" }, { "authors": "Gu Wang; Fabian Manhardt; Federico Tombari; Xiangyang Ji", "journal": "", "ref_id": "b59", "title": "Gdr-net: Geometry-guided direct regression network for monocular 6d object pose estimation", "year": "2021" }, { "authors": "He Wang; Srinath Sridhar; Jingwei Huang; Julien Valentin; Shuran Song; Leonidas J Guibas", "journal": "", "ref_id": "b60", "title": "Normalized object coordinate space for category-level 6d object pose and size estimation", "year": "2019" }, { "authors": "Qian Wang; Biao Zhang; Michael Birsak; Peter Wonka", "journal": "", "ref_id": "b61", "title": "Instructedit: Improving automatic masks for diffusionbased image editing with user instructions", "year": "2023" }, { "authors": "Yu Xiang; Tanner Schmidt; Venkatraman Narayanan; Dieter Fox", "journal": "", "ref_id": "b62", "title": "Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes", "year": "" }, { "authors": "Defeng Xie; Ruichen Wang; Jian Ma; Chen Chen; Haonan Lu; Dong Yang; Fobo Shi; Xiaodong Lin", "journal": "", "ref_id": "b63", "title": "Edit everything: A text-guided generative system for images editing", "year": "2023" }, { "authors": "Jinyu Yang; Mingqi Gao; Zhe Li; Shang Gao; Fangjing Wang; Feng Zheng", "journal": "", "ref_id": "b64", "title": "Track anything: Segment anything meets videos", "year": "2023" }, { "authors": "Yunhan Yang; Xiaoyang Wu; Tong He; Hengshuang Zhao; Xihui Liu", "journal": "", "ref_id": "b65", "title": "Sam3d: Segment anything in 3d scenes", "year": "2023" }, { "authors": "Tao Yu; Runseng Feng; Ruoyu Feng; Jinming Liu; Xin Jin; Wenjun Zeng; Zhibo Chen", "journal": "", "ref_id": "b66", "title": "Inpaint anything: Segment anything meets image inpainting", "year": "2023" }, { "authors": "Chaoning Zhang; Dongshen Han; Yu Qiao; Jung Uk Kim; Sung-Ho Bae; Seungkyu Lee; Choong Seon; Hong ", "journal": "", "ref_id": "b67", "title": "Faster segment anything: Towards lightweight sam for mobile applications", "year": "2023" }, { "authors": "Chaoning Zhang; Sheng Zheng; Chenghao Li; Yu Qiao; Taegoo Kang; Xinru Shan; Chenshuang Zhang; Caiyan Qin; Francois Rameau; Sung-Ho Bae", "journal": "", "ref_id": "b68", "title": "A survey on segment anything model (sam): Vision foundation model meets prompt engineering", "year": "2023" }, { "authors": "Dingyuan Zhang; Dingkang Liang; Hongcheng Yang; Zhikang Zou; Xiaoqing Ye; Zhe Liu; Xiang Bai", "journal": "", "ref_id": "b69", "title": "Sam3d: Zero-shot 3d object detection via segment anything model", "year": "2023" }, { "authors": "Haojie Zhang; 
Yongyi Su; Xun Xu; Kui Jia", "journal": "", "ref_id": "b70", "title": "Improving the generalization of segmentation foundation model under distribution shift via weakly supervised adaptation", "year": "2023" }, { "authors": "Renrui Zhang; Zhengkai Jiang; Ziyu Guo; Shilin Yan; Junting Pan; Hao Dong; Peng Gao; Hongsheng Li", "journal": "", "ref_id": "b71", "title": "Personalize segment anything model with one shot", "year": "2023" }, { "authors": "Zhenghao Zhang; Zhichao Wei; Shengfan Zhang; Zuozhuo Dai; Siyu Zhu", "journal": "", "ref_id": "b72", "title": "Uvosam: A mask-free paradigm for unsupervised video object segmentation via segment anything model", "year": "2023" }, { "authors": "Xu Zhao; Wenchao Ding; Yongqi An; Yinglong Du; Tao Yu; Min Li; Ming Tang; Jinqiao Wang", "journal": "", "ref_id": "b73", "title": "Fast segment anything", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 88.47, 501.4, 197.9, 9.81 ], "formula_id": "formula_0", "formula_text": "M, C = Ψ Mask (Φ Image (I), Φ Prompt (P r )),(1)" }, { "formula_coordinates": [ 4, 308.86, 85.54, 236.25, 32.49 ], "formula_id": "formula_1", "formula_text": "f cls T k and N patch T k patch embeddings {f patch T k ,i } N patch T k i=1" }, { "formula_coordinates": [ 4, 369.73, 165.69, 24.77, 17.69 ], "formula_id": "formula_2", "formula_text": "N patch Im j=1" }, { "formula_coordinates": [ 4, 308.86, 233.11, 120.6, 22.27 ], "formula_id": "formula_3", "formula_text": "values from { <f cls Im ,f cls T k > |f cls Im |•|f cls T k | } N T k=1" }, { "formula_coordinates": [ 4, 310.37, 344.35, 233.23, 45.74 ], "formula_id": "formula_4", "formula_text": "s appe = 1 N patch Im N patch Im j=1 max i=1,...,N patch T best < f patch Im,j , f patch T best ,i > |f patch Im,j | • |f patch T best ,i | . (2" }, { "formula_coordinates": [ 4, 390.9, 518.27, 154.21, 23.22 ], "formula_id": "formula_5", "formula_text": "s geo = B m B o B m B o .(3)" }, { "formula_coordinates": [ 4, 358.83, 606.59, 186.28, 23.23 ], "formula_id": "formula_6", "formula_text": "s m = s sem + s appe + r vis • s geo 1 + 1 + r vis . (4)" }, { "formula_coordinates": [ 5, 62.5, 530.41, 223.87, 12.69 ], "formula_id": "formula_7", "formula_text": "A = [f bg m , F m ] × [f bg o , F o ] T ∈ R (Nm+1)×(No+1) ,(5)" }, { "formula_coordinates": [ 5, 86.04, 551.33, 200.33, 33.7 ], "formula_id": "formula_8", "formula_text": "Ã Ã = Softmax row (A/τ ) • Softmax col (A/τ ),(6)" }, { "formula_coordinates": [ 6, 50.11, 201.39, 57.06, 13.84 ], "formula_id": "formula_9", "formula_text": "P c m ∈ R N c m ×3" }, { "formula_coordinates": [ 6, 233.63, 201.39, 52.24, 13.84 ], "formula_id": "formula_10", "formula_text": "P c o ∈ R N c o ×3" }, { "formula_coordinates": [ 6, 55.09, 391.18, 231.27, 23.74 ], "formula_id": "formula_11", "formula_text": "s hyp = N c m / p c m ∈P c m min p c o ∈P c o ||R T hyp (p c o -t hyp )-p c m || 2 .(7)" }, { "formula_coordinates": [ 6, 64.91, 520.06, 61.57, 13.84 ], "formula_id": "formula_12", "formula_text": "P f m ∈ R N f m ×3" }, { "formula_coordinates": [ 6, 50.11, 521.71, 236.25, 24.41 ], "formula_id": "formula_13", "formula_text": "P f o ∈ R N f o ×3" }, { "formula_coordinates": [ 6, 308.86, 508.02, 181.6, 12.19 ], "formula_id": "formula_14", "formula_text": "N c m = N c o = 196 and N f m = N f o = 2048" }, { "formula_coordinates": [ 14, 418.06, 368.04, 72.35, 13.84 ], "formula_id": "formula_15", "formula_text": "P c m ∈ R N c m ×3 of" }, { "formula_coordinates": [ 14, 356.76, 619.11, 188.35, 13.14 ], "formula_id": "formula_16", "formula_text": "P c = M c m • ( Ãc [1 :, 1 :]) γ • M cT o ,(10)" }, { "formula_coordinates": [ 16, 308.86, 463.03, 236.25, 27.38 ], "formula_id": "formula_17", "formula_text": "M f m ∈ R N f m ×1 and M f o ∈ R N f o ×1" }, { "formula_coordinates": [ 16, 357.65, 511.76, 187.46, 13.14 ], "formula_id": "formula_18", "formula_text": "P f = M f m • ( Ãf [1 :, 1 :]) • M f T o .(11)" }, { "formula_coordinates": [ 17, 111.67, 532.75, 174.69, 21.61 ], "formula_id": "formula_20", "formula_text": "y m = 0 if d k * ≥ δ dis k * if d k * < δ dis ,(13)" }, { "formula_coordinates": [ 17, 74.27, 581.83, 20.6, 10.81 ], "formula_id": "formula_21", "formula_text": "k * =" } ]
2023-11-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6", "b5", "b7", "b2", "b19" ], "table_ref": [], "text": "Referring image segmentation (Hu, Rohrbach, and Darrell 2016) (RIS) is a fundamental and challenging multi-modal task, which involves both vision-language understanding (Li et al. 2022a) and instance segmentation (He et al. 2017). The target of RIS is to locate particular regions according to the given query in natural language. It has great potential in many applications, e.g., human-machine interaction and interactive image segmentation.\nExisting RIS methods (Hu et al. 2020;Ding et al. 2022;Liu, Ding, and Jiang 2023) introduce various fusion methods to obtain multi-modal features. Then, these features are sent into a mask decoder to predict the segmentation mask. The chair far from the cake.\nThe black chair next to the person in blue." }, { "figure_ref": [], "heading": "Rank of words:", "publication_ref": [], "table_ref": [], "text": "Girl in pink pants.\nThe boy wearing a cap.\nThe racket on the left." }, { "figure_ref": [ "fig_1" ], "heading": "Second Third First cap cap racket racket pants pants", "publication_ref": [ "b2", "b19", "b7", "b11" ], "table_ref": [], "text": "Figure 1: The illustration of Vision-Guided Attention (a) and Language-Guided Attention (b). For Vision-Guided Attention, we list the three most informative words for the image region symbolized by a red pentangle. For Language-Guided Attention, the most corresponding regions for each word, e.g., 'cap', 'pants', and 'racket', are denoted by different colors. Previous RIS methods only consider Vision-Guided Attention to fuse visual and linguistic features, but none of these methods introduce Language-Guided Attention to generate vision-aware linguistic features and explicitly use them in the mask decoder.\nDespite significant advancements in RIS, there are still several limitations. First, current methods (Ding et al. 2022;Liu, Ding, and Jiang 2023) utilize the unidirectional attention mechanism to fuse features from different modalities. However, they only consider the linguistic guidance for visual features but ignore the visual guidance for linguistic features. Unlike the unidirectional attention mechanism, BRINet (Hu et al. 2020) adopts both visual and linguistic guidance in a serial bidirectional way. Nevertheless, due to the serial manner, it only implicitly generates vision-aware linguistic features in the fusion model but does not explicitly use these features in the mask decoder. Second, existing methods use a mask decoder to generate the final segmen- tation mask from the multi-modal features. However, since multi-modal features are produced by integrating linguistic properties into visual features, they still contain a lot of visual properties. Without explicit linguistic guidance, the mask decoder focuses on the most visually salient entities but ignores linguistic consistency. Moreover, existing methods typically fine-tune the encoders to adapt them for the dataset of RIS. However, this strategy shrinks the generalization ability of encoders pre-trained on a large-scale dataset.\nIn this paper, we propose a novel method that utilizes the mutual-aware attention mechanism and transfers the knowledge of Segment Anything Model (SAM) (Kirillov et al. 2023) into RIS. First, we introduce the Mutual-Aware Attention block to bidirectionally model the relationship between visual and linguistic features. 
The Mutual-Aware Attention block consists of two parallel branches: Vision-Guided Attention and Language-Guided Attention. As shown in Fig. 1, Vision-Guided Attention assigns different weights to each word in the expression for each image region (such as the red pentangle) and produces language-aware visual features. Similarly, Language-Guided Attention explores the corresponding image region for the word, e.g., 'cap', 'pants', 'racket', and generates vision-aware linguistic features. We consider language-aware visual features and vision-aware linguistic features as the mutual-aware attention features of our method. Second, we design a Mask Decoder to enable explicit linguistic guidance. Specifically, we introduce a multi-modal query token to integrate visual and linguistic properties, which helps to segment the correct referring region. Finally, we freeze the image encoder of SAM to preserve its generalization ability. To transfer the knowledge of SAM into RIS, we introduce a Feature Enhancement module to integrate global and local visual features.\nWe demonstrate the results of our method and other methods in Fig. 2. To our knowledge, our work is the first to transfer the powerful knowledge of SAM into RIS. To be summarized, our contributions are listed as follows:\n• We propose a referring image segmentation method called MARIS, which leverages the powerful knowledge of SAM and uses the mutual-aware attention mechanism to model the relationship between visual and linguistic features bidirectionally.\n• We introduce a Mutual-Aware Attention block to produce language-aware visual features and vision-aware linguistic features by weighting each word of the sentence and each region of visual features.\n• We design a Mask Decoder to utilize explicit linguistic guidance and get a segmentation mask consistent with the language expression. Besides, we introduce a multi-modal query token to integrate visual and linguistic properties.\n• The proposed approach achieves new state-of-the-art performance on the three widely used RIS datasets, including RefCOCO, RefCOCO+, and G-Ref. Additionally, our method exhibits excellent generalization capabilities." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Referring Image Segmentation", "publication_ref": [ "b6", "b40", "b20", "b39", "b21", "b23", "b39", "b5", "b23", "b30", "b31", "b28", "b38", "b7", "b8", "b37", "b10", "b2", "b2", "b32", "b26" ], "table_ref": [], "text": "Referring image segmentation (Hu, Rohrbach, and Darrell 2016) aims to segment a particular region according to the natural language expression. Early approaches (Yu et al. 2016;Liu et al. 2017;Yu et al. 2018;Liu et al. 2019;Luo et al. 2020) concatenate visual and linguistic features to produce multi-modal features, which are fed into the fully convolutional network for segmentation generation. Yu et al. (Yu et al. 2018) proposed a two-stage method that first generates masks by Mask R-CNN (He et al. 2017), and then selected the target mask with linguistic prompt. Besides, MCN (Luo et al. 2020) presented a multi-task framework to jointly optimize two related tasks, i.e., referring expression comprehension and segmentation.\nAs the attention mechanism (Vaswani et al. 2017;Wang et al. 2018) achieved great success in various fields, it has been exploited in the field of RIS (Shi et al. 2018;Ye et al. 2019;Hu et al. 2020;Jing et al. 2021). Later, some methods (Yang et al. 2022;Kim et al. 2022;Ding et al. 
2022) adopt the transformer-based architectures. VLT (Ding et al. 2022) introduces a Vision-Language Transformer to enhance deep interactions among multi-modal features. More recently, CRIS (Wang et al. 2022) utilized CLIP (Radford et al. 2021) \n× 2 1 2 1 2 i j i j 1 2 i j 1 2 i j Mask × 2\nMulti-modal Query Token\n1 2 i j 1 2 i j 1 2 i j 1 2 i j 1 2 i j 1 2 1 2 i j i j 1 2 i j 1 2 i j × 2\nMulti-modal Query Token " }, { "figure_ref": [], "heading": "Attention Mechanism", "publication_ref": [ "b15", "b41", "b13", "b12", "b12", "b36" ], "table_ref": [], "text": "Attention mechanism has been widely used in various multimodal tasks. In (Li et al. 2020;Zhou et al. 2020), Transformer schemes are used to exploit the long-range dependencies between visual features and linguistic features. Besides, the Transformer-decoder based architectures (Li et al. 2021(Li et al. , 2023) ) are also used to fuse the visual and linguistic features. For example, BLIP-2 (Li et al. 2023) builds a Qformer based on the cross-attention mechanism to assemble visual and linguistic information by a set of learned queries. Later, GLIP (Li et al. 2022a) introduces bidirectional crossattention to obtain multi-modal features. Yang et al. (Yang et al. 2023) build a Trajectory to Word attention for videolanguage tasks. In this paper, we propose a Mutual-Aware Attention scheme to generate language-aware visual features and vision-aware linguistic features, where the latter guides the former to generate an accurate mask in the Mask Decoder." }, { "figure_ref": [], "heading": "Powerful Foundation Models in Computer Vision", "publication_ref": [ "b3", "b22" ], "table_ref": [], "text": "Foundation models are trained on broad data and can be adapted (e.g., fine-tuned) to a wide range of downstream tasks. In recent years, some vision transformers (Dosovitskiy et al. 2020;Liu et al. 2021) " }, { "figure_ref": [ "fig_2" ], "heading": "Methodology", "publication_ref": [ "b11", "b26" ], "table_ref": [], "text": "The overall architecture of MARIS is shown in Fig. 3. Firstly, the input image and language expression are projected into the visual (F v1 , F v2 , F v3 ) and linguistic (F l ) feature spaces via a pre-trained image encoder (Kirillov et al. 2023) and a text encoder (Radford et al. 2021), respectively. Note that the parameters of the image and text encoder are frozen. Secondly, we design a Feature Enhancement (FE) module, which fuses features from different layers of the image encoder and obtains enhanced visual features (F v ). Thirdly, enhanced visual features (F v ) and linguistic features (F l ) are fed into the Mutual-Aware Attention (MA) block to obtain mutual-aware attention features. Finally, we introduce the Mask Decoder (DE) with a single multi-modal query token to utilize explicit linguistic guidance and produce a language-consistent mask. We will describe the details of these steps in the following subsections." }, { "figure_ref": [], "heading": "Image Encoder and Text Encoder", "publication_ref": [ "b11", "b26", "b29" ], "table_ref": [], "text": "The image encoder of SAM (Kirillov et al. 2023) \nF v3 ∈ R H×W ×C .\nHere, H and W are the height and width of the feature map, respectively, and C denotes the channel size of visual features.\nFor the language expression, we adopt a text encoder pretrained by (Radford et al. 2021) and obtain linguistic features\nF l ∈ R Lt×C ′ .\nHere, L t denotes the length of linguistic features. 
Accordingly, C ′ is the channel size of linguistic features.\nTo preserve the generalization capability of the image and text encoder and save computational resources, we freeze the parameters of these encoders, which also prevents catastrophic forgetting (Toneva et al. 2018)." }, { "figure_ref": [], "heading": "Feature Enhancement", "publication_ref": [], "table_ref": [], "text": "To generate an accurate segmentation mask for the causal language expression, it is necessary to focus on both global semantic information and local grained details. For the image encoder of SAM, features from the deep layer and shallow layer contain accurate global features and abundant local features, respectively. Based on this consideration, we first fuse the shallow layer feature F v1 and the middle layer feature F v2 as follows.\nF = CBA([MLP(F v1 ), MLP(F v2 )]),(1)\nwhere F denotes the early enhanced feature. CBA(•) is sequential operations, including convolution layers with 3 × 3 kernels, a batch-normalization layer, and GeLu activation function. MLP(•) represents the Multi-Layer Perceptron (MLP) layer. [•, •] is the concatenation operation. Subsequently, we fuse the early enhanced feature F and the deep layer feature F v3 to obtain the final enhanced visual feature,\nF v = CBA([MLP( F ), MLP(F v3 )]),(2)\nwhere F v ∈ R H×W ×C is the final enhanced visual feature.\nThen the feature map is flattened into a 2-D vector\nF v ∈ R Lv×C , where L v is equal to H × W ." }, { "figure_ref": [ "fig_3" ], "heading": "Mutual-Aware Attention", "publication_ref": [ "b10", "b2", "b7" ], "table_ref": [], "text": "After obtaining visual and linguistic features, the first step is to fuse these features. Existing methods (Kim et al. 2022;Ding et al. 2022) propose different strategies to get multimodal features. However, these methods only assign different weights to each word in the expression but treat each image region equally. BRINet (Hu et al. 2020) adopts a serial bidirectional design to utilize both visual and linguistic guidance. However, the serial design fails to utilize visionaware linguistic features explicitly. To address these issues, we propose the Mutual-Aware Attention block, which consists of two parallel branches. Specifically, the first branch is Vision-Guided Attention, which weights different words for each pixel of visual features. Accordingly, the second branch is Language-Guided Attention, which weights different image regions for each word of the sentence. The architecture of Mutual-Aware Attention is shown in Fig. 4. First, we model the correlation between linguistic features and visual features as follows,\n1 2 i j 1 2 i j 1 2 1 2 i j i j 1 2 i j 1 2 i j 1 2 1 2 i j i j 1 2 i j 1 2 i j 1 2 i j 1 2 i j Add & LayerNorm Add & LayerNorm (b) Language-Guided Attention (a) Vision-Guided Attention Attention Mask\nZ v = F v W v , Z l = F l W l , A = Softmax(Z v Z ⊤ l + M)(3)\nwhere A ∈ R Lv×Lt is the attention weight. W v and W l are learnable matrices of size C × C and C ′ × C, which aim to transform F v and F l into the same feature dimension. M is the attention mask, which is calculated by,\nM(i, j) = 0 ifM (i, j) < τ -∞ otherwise ,(4)\nwhere M = 1/(1 + e -ZvZ ⊤ l ) denotes the relevant scores between visual and linguistic features. τ is the threshold, and its value will be discussed in the supplementary material. 
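A minimal PyTorch-style sketch of the masked attention in Eqs. (3)-(4) is given below. Module and parameter names are illustrative rather than the released implementation; the softmax is assumed to run over the word dimension, and the mask assigns −∞ to low-relevance vision-word pairs, following the stated goal of suppressing irrelevant pairs (the case ordering in the extracted Eq. (4) appears inverted).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MutualAttentionWeights(nn.Module):
    """Computes Z_v, Z_l, and the masked attention weights A of Eqs. (3)-(4)."""
    def __init__(self, c_vis, c_txt, c_common, tau=0.1):
        super().__init__()
        self.w_v = nn.Linear(c_vis, c_common, bias=False)   # W_v: C x C projection
        self.w_l = nn.Linear(c_txt, c_common, bias=False)   # W_l: C' x C projection
        self.tau = tau                                       # relevance threshold of Eq. (4)

    def forward(self, f_v, f_l):
        # f_v: (Lv, C) enhanced visual features; f_l: (Lt, C') linguistic features.
        z_v, z_l = self.w_v(f_v), self.w_l(f_l)              # project into a shared space
        logits = z_v @ z_l.t()                               # (Lv, Lt) raw affinities Z_v Z_l^T
        relevance = torch.sigmoid(logits)                    # relevance scores between pairs
        # Suppress irrelevant vision-word pairs: entries below tau receive -inf before softmax.
        mask = torch.where(relevance >= self.tau,
                           torch.zeros_like(logits),
                           torch.full_like(logits, float('-inf')))
        # Softmax over the word dimension (assumed). If a region matches no word at all,
        # its row would be all -inf; in practice one would keep at least its best word.
        attn = F.softmax(logits + mask, dim=-1)              # A in Eq. (3)
        return z_v, z_l, attn
```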
Through the attention mask M, we alleviate the interference from irrelevant pairs in visual and linguistic features.\nAfter that, we obtain mutual-aware attention features, including language-aware visual features F lav ∈ R Lv×C and vision-aware linguistic features F val ∈ R Lt×C as follows,\nF lav =LayerNorm(AZ l + Z v ), F val =LayerNorm(A ⊤ Z v + Z l ).\n(5)\nwhere LayerNorm(•) denotes the Layer Normalization. We use two sequential Mutual-Aware Attention blocks in our implementation, and the ablation in terms of the number of Mutual-Aware Attention blocks will be discussed in the supplementary material." }, { "figure_ref": [], "heading": "Mask Decoder", "publication_ref": [ "b0", "b1" ], "table_ref": [], "text": "The mutual-aware attention features are fed into the mask decoder to obtain the final mask. Since the multi-modal features contain excessive visual properties, the mask decoder is likely to segment visual-dominant entities without explicit linguistic guidance. To enable explicit linguistic guidance, we build a Mask Decoder based on the mask classification framework (Carion et al. 2020;Cheng et al. 2022). Specifically, we only use a single multi-modal query token with random initialization. Different from DETR/Mask2former, we combine the multi-modal query token with vision-aware linguistic features as the input of the decoder. Such a design enables the multi-modal query token to integrate linguistic information and interact with visual features, thus getting a consistent segmentation with the language expression.\nTo this end, language-aware visual feature F lav is first fed into a multi-head self-attention layer to extract powerful contextual information.\nFlav = MHSA(F lav ) + F lav , (6) where MHSA(•) is the multi-head self-attention layer.\nThen, the multi-modal query token F m ∈ R 1×C along with F val ∈ R Lt×C are sent to a multi-head self-attention layer to aggregate the vision-aware linguistic feature. Let\nF c := [F m , F val ],\nThe aggregation is formulated as\nFc := [ Fm , Fval ] = MHSA(F c ) + F c ,(7)\nwhere Fc denotes the evolved feature concatenating the evolved versions of vision-aware linguistic feature Fval and multi-modal query token Fm . Subsequently, we perform interaction between Flav and Fc , obtaining the evolved language-aware visual feature F lav via a multi-head cross-attention layer as follows.\nF lav = MHCA( Flav , Fc , Fc ) + Flav .\n(8) where MHCA(•) is the multi-head cross-attention layer.\nThe next decoder block takes evolved language-aware visual feature F lav and evolved concatenated feature Fc from the previous layer as inputs.\nAfter that, the evolved language-aware visual feature F lav is upsampled by two sequential blocks. Each consists of a convolutional layer and an upsample operation. We extract the evolved multi-modal query token Fm from the evolved concatenated feature Fc , and send it to a MLP layer. Finally, we multiply the output of MLP with upsampled visual features to generate the segmentation mask." }, { "figure_ref": [], "heading": "Losses", "publication_ref": [ "b17" ], "table_ref": [], "text": "In the training process, we adopt the linear combination of focal loss (Lin et al. 2017) and dice loss (Milletari, Navab, and Ahmadi 2016) formulated as follows. \nL = L f + L d ,(" }, { "figure_ref": [], "heading": "Experiments Datasets", "publication_ref": [ "b9", "b24", "b18" ], "table_ref": [], "text": "We conduct experiments on three widely used datasets, including RefCOCO & RefCOCO+ (Kazemzadeh et al. 2014), and G-Ref (Mao et al. 
2016). The images of these three datasets are from MSCOCO (Lin et al. 2014), but are annotated with language expressions with different styles. Expressions of RefCOCO/RefCOCO+ have an average length of 3.61/3.53. Compared with RefCOCO, expressions about absolute locations, e.g., left/right, are forbidden in Ref-COCO+. G-Ref has a longer average length (8.4 words). Following previous works, we evaluate both RefCOCO and RefCOCO+ in three subsets: validation, testA, and testB. For G-Ref, we leverage both partitions of UMD and Google for the evaluation." }, { "figure_ref": [], "heading": "Metrics", "publication_ref": [ "b2", "b19" ], "table_ref": [], "text": "Following previous works (Ding et al. 2022;Liu, Ding, and Jiang 2023), we utilize two metrics in our experiments, including mask Intersection-over-Union (IoU) score and Precision with thresholds (Pr@X). Specifically, IoU scores reveal the predicted mask quality by calculating intersection regions over union regions between the predicted mask and the ground truth across all testing samples. Besides, Pr@X denotes the ratio of predicted masks with IoU scores higher than the threshold X ∈ {70, 80, 90}. Implementation details are reported in the supplementary material. For example, Pr@70 denotes the location ability of the model, while Pr@90 shows the ability to generate a high-quality mask." }, { "figure_ref": [], "heading": "Comparison With State-of-the-art Methods", "publication_ref": [ "b35", "b19" ], "table_ref": [], "text": "We compare the proposed MARIS with previous stateof-the-art (SOTA) methods on the three most widely used benchmarks, i.e., RefCOCO, RefCOCO+, and G-Ref.\nQuantitative results are shown in Tab. 1.\nOur method achieves significant improvements over the second-best SOTA method, MCRES (Xu et al. 2023), on the RefCOCO dataset. Specifically, our method outperforms MCRES by 1.28%, 1.94%, and 1.00% on the val, testA, and testB split, respectively. These results demonstrate the effectiveness of our framework for the RIS task.\nOn the RefCOCO+ dataset, our MARIS improves over ReLA (Liu, Ding, and Jiang 2023) on the val and testA splits by 0.33% and 1.08%, respectively. However, we observe a slight performance drop of 0.32% on the testB split compared to ReLA. A possible reason is that the frozen text encoder gets sub-optimal linguistic feature representation for language expression without absolute locations. When the test set (i.e., testB split) contains images with multiple objects that are hard to be distinguished without absolute locations, our method exhibits inferior performance.\nFinally, on another more complex G-Ref dataset, our method achieves an IoU improvement of 0.48%, 0.38%, and 1.98% on the val (U), test (U), and val (G) split, respectively. This improvement indicates that our method is also competitive for long and causal language expressions. Besides, we also demonstrate the ratio of predicted masks with IoU " }, { "figure_ref": [ "fig_6", "fig_6", "fig_6", "fig_6", "fig_6", "fig_6" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "To verify the effectiveness of the proposed modules of our method, we conduct ablation studies to investigate each component According to Tab. 3, our MA outperforms TE, SDF, BCM by 1.35%, 1.12%, 1.00% IoU score, respectively. 
This is because existing methods only explore the informative words of each image region, while our method also provides the corresponding image regions of each word in the language We also provide some visualized examples in the supplementary material to show that our method generates a more accurate and high-quality mask than others. Finally, according to # 8, the attention mask alleviates the interference of irrelevant pairs between visual and linguistic features, which further improves the performance by 0.68% IoU. Besides, we visualize the output of Vision-Guided Attention and Language-Guided Attention in Fig. 5(a) and (b), respectively. For the red rectangle in Fig. 5(a1), we list attention weights of each word in Fig. 5(a3). Our model considers 'bottom' and 'blue' as the most informative words. Thus, our prediction mask accurately locates the bottom boy in blue, as shown in Fig. 5(a2). Similarly, for the word 'black', we show its attention map in Fig. 5(b3). In the Mask Decoder, the final segmentation mask is refined according to the attention map. Our prediction mask is shown in Fig. 5" }, { "figure_ref": [], "heading": "(b2).", "publication_ref": [ "b2" ], "table_ref": [], "text": "Mask Decoder According to Tab. 2, replacing the Mask Decoder with SAM's decoder reduces the IoU score by 1.23%. This reduction is caused by the token to image attn. (TOI-A) layer in SAM's decoder. Specifically, the TOI-A layer performs cross-attention by taking prompt tokens containing output tokens and linguistic features as queries (Q), visual features as keys (K) and vectors (V). Since these output tokens are initialized randomly, they make an uncertain adjustment to the evolved visual features and thus affect the performance. To verify the disadvantage of TOI-A layer for RIS, we insert this layer into each block of our decoder. As shown in # 9 of Tab. 4, the TOI-A layer leads to 0.80% IoU decrease. Besides, to verify the effectiveness of explicit linguistic guidance (ELP), we also implement the experiment without explicit linguistic guidance (# 10). Similar to VLT (Ding et al. 2022), we multiply F val with F lav to obtain the input feature of the Mask Decoder. # 10 in Tab. 4 indicates that explicit linguistic guidance improves the IoU performance by 1.78%, which demonstrates the effectiveness of the proposed decoder. Compared with removing Feature Enhancement (FE) module (# 1), multi-scale features generated from the last layer improve the performance by 2.36%. However, compared with using features from different layers, this baseline shrinks the IoU performance by 4.65%. The reason for performance degradation is that features from the last layer contain highly global information, and multi-scale features generated from the last layer exhibit a limited representation of grained details that are essential for RIS. " }, { "figure_ref": [ "fig_7" ], "heading": "Generalization Ability", "publication_ref": [ "b33" ], "table_ref": [], "text": "To demonstrate the generalization ability of our method, we conduct experiments on the test split of PhraseCut (Wu et al. 2020). PhraseCut contains 1287 categories, which is much more diverse than 80 categories in COCO. Thus, we compare with two previous methods (as their parameters are available online) on PhraseCut to evaluate their generalization ability.\nAs shown in Tab. 6, our method surpasses previous methods in terms of generalization ability. 
For example, when training on the RefCOCO dataset, our method exceeds CRIS and LAVT by 7.29% and 6.14%, respectively. This advantage comes from the frozen text encoder and image encoder and the introduction of Feature Enhancement. In contrast, encoders of other methods are trainable and thus might be biased to the fine-tuned dataset. We also provide some successful and failed visualized examples in Fig. 6. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper proposes a novel referring image segmentation method called MARIS, which effectively uses mutual-aware attention features and incorporates the powerful knowledge from SAM into RIS. Our model contains three components: the Feature Enhancement module, the Mutual-Aware Attention block, and a Mask Decoder. To be specific, the Feature Enhancement module incorporates global and local features to transfer the knowledge from the frozen image encoder of SAM. Subsequently, the Mutual-Aware Attention block produces language-aware visual features and visionaware linguistic features by weighting each word of the sentence and each region of visual features. Finally, we design a Mask Decoder to utilize explicit linguistic guidance. Specifically, we introduce the multi-modal query token to integrate visual and linguistic properties. Extensive experiments on three well-known benchmarks and PhraseCut demonstrate that MARIS achieves new state-of-the-art performance and great generalization ability." } ]
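As a supplementary illustration of the Mutual-Aware Attention block summarized above, a single-head sketch is given below. It follows the projection, affinity, mask, and two residual updates described in the paper (Eqs. 3-4 and the F_lav / F_val updates); the direction of the attention mask (suppressing low-affinity region-word pairs), the use of a softmax-normalized affinity as the thresholded quantity, and the layer sizes are our reading and are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn

class MutualAwareAttention(nn.Module):
    """Sketch of the two-branch attention producing language-aware visual
    features (F_lav) and vision-aware linguistic features (F_val)."""
    def __init__(self, vis_dim, txt_dim, dim, tau=0.05):
        super().__init__()
        self.w_v = nn.Linear(vis_dim, dim, bias=False)   # F_v -> Z_v
        self.w_l = nn.Linear(txt_dim, dim, bias=False)   # F_l -> Z_l
        self.norm_v = nn.LayerNorm(dim)
        self.norm_l = nn.LayerNorm(dim)
        self.tau = tau                                   # mask threshold (illustrative)

    def forward(self, f_v, f_l):
        # f_v: (B, Lv, Cv) flattened visual tokens; f_l: (B, Lt, Cl) word tokens
        z_v, z_l = self.w_v(f_v), self.w_l(f_l)
        affinity = torch.bmm(z_v, z_l.transpose(1, 2))   # (B, Lv, Lt)
        # Attention mask M: drop region-word pairs whose normalized affinity is
        # weak (assumed direction, matching the goal of suppressing irrelevant pairs).
        probs = torch.softmax(affinity, dim=-1)
        mask = torch.where(probs < self.tau,
                           torch.full_like(affinity, float("-inf")),
                           torch.zeros_like(affinity))
        a = torch.softmax(affinity + mask, dim=-1)       # shared attention weights A
        f_lav = self.norm_v(torch.bmm(a, z_l) + z_v)                   # language-aware visual
        f_val = self.norm_l(torch.bmm(a.transpose(1, 2), z_v) + z_l)   # vision-aware linguistic
        return f_lav, f_val
```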
Referring image segmentation (RIS) aims to segment a particular region based on a language expression prompt. Existing methods incorporate linguistic features into visual features and obtain multi-modal features for mask decoding. However, these methods may segment the visually salient entity instead of the correct referring region, as the multi-modal features are dominated by the abundant visual context. In this paper, we propose MARIS, a referring image segmentation method that leverages the Segment Anything Model (SAM) and introduces a mutual-aware attention mechanism to enhance the cross-modal fusion via two parallel branches. Specifically, our mutual-aware attention mechanism consists of Vision-Guided Attention and Language-Guided Attention, which bidirectionally model the relationship between visual and linguistic features. Correspondingly, we design a Mask Decoder to enable explicit linguistic guidance for segmentation that is more consistent with the language expression. To this end, a multi-modal query token is proposed to integrate linguistic information and interact with visual information simultaneously. Extensive experiments on three benchmark datasets show that our method outperforms the state-of-the-art RIS methods. Our code will be publicly available.
MARIS: Referring Image Segmentation via Mutual-Aware Attention Features
[ { "figure_caption": "left black back.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Segmentation masks generated by our method (c) and other methods, including directly using SAM (d), CRIS (Wang et al. 2022) (e), and ReLA (Liu, Ding, and Jiang 2023) (f). Directly using SAM means training the SAM decoder only. We provide more examples in the supplementary material.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: The overview of MARIS. For an input image, the image encoder extracts shallow/middle/deep visual features (F v1 , F v2 , F v3 ). For the language expression, the text encoder generates linguistic features (F l ). Then these features are sent into the Feature Enhancement module and obtain enhanced visual features (F v ). Subsequently, Mutual-Aware Attention (MA) blocks receive enhanced visual features and linguistic features as inputs to get mutual-aware attention features. After that, the Mask Decoder utilizes a multi-modal query token and mutual-aware attention features to get the final segmentation mask.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The architecture of Mutual-Aware Attention block. The left part (a) is the Vision-Guided Attention branch, and the right part (b) is the Language-Guided Attention branch. A and A ⊤ denote the attention weights. ⊗ symbolizes the matmul product operation.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "9)where L f and L d are focal loss and dice loss, respectively.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ", including Feature Enhancement (FE), Mutual-Aware Attention (MA), and Mask Decoder (DE) on the Re-fCOCO val dataset, as shown in Tab. 2. Note that we use the SAM's decoder (Kirillov et al. 2023) for the variant excluding the proposed decoder. Mutual-Aware Attention Blocks Mutual-Aware Attention blocks are introduced to weight different image regions and different words in the sentence. It brings an improvement by 1.88% in terms of IoU score. To verify the superiority of Mutual-Aware Attention, we conduct experiments that use other methods (Kim et al. 2022; Ding et al. 2022; Hu et al. 2020) to incorporate features of different modalities. Specifically, ReSTR (Kim et al. 2022) utilizes a transformer encoder (TE) to model the long-range dependencies. VLT (Ding et al. 2022) adopts the Spatial Dynamic Fusion (SDF) to produce different linguistic feature vectors, which is equivalent to using only Visual-Guided Attention. BRINet (Hu et al. 
2020) introduces a serial bidirectional cross-modal (BCM) module to utilize visual and linguistic guidance.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The result of Vision-Guided Attention (a) and Language-Guided Attention (b).", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The successful example (top) and failed example (bottom) in PhraseCut.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "as the image and text encoder and transferred the", "figure_data": "Image Encoder of SAM Image Encoder Image EncoderFeature EnhancementSelf-AttentionqueryVision-Guided AttentionCross-AttentionIoU=91.20keyvectorLanguage-Guided AttentionStoplight thatMASelf-AttentionMLPis greenMask DecoderText Encoder Text Encoder", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": ", a VITbased backbone, takes images of size 1024 × 1024 as inputs and generates visual features of spatial size 64 × 64. In particular, SAM uses a VIT-H with 14 × 14 windows and four plain global attention blocks. For an input image, we utilize visual features from 2nd∼4th global attention blocks, which are defined as shallow layer features F v1 ∈ R H×W ×C , middle layer features F v2 ∈ R H×W ×C and deep layer features", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparisons with previous state-of-the-art methods in terms of IoU. U: UMD split; G: Google split.", "figure_data": "MethodsvalRefCOCO testAtestBvalRefCOCO+ testAG-Ref testB val (U) test (U) val (G)SAM (ICCV'23)57.23 58.71 56.34 44.95 50.35 41.3847.8749.3146.23ReSTR (CVPR'22)67.22 69.30 64.45 55.78 60.44 48.27--54.48CRIS (CVPR'22)70.47 73.18 66.10 62.27 68.08 53.6859.8760.36RefTR (NIPS'21)70.56 73.49 66.57 61.08 64.69 52.7358.7358.51LAVT (CVPR'22)72.73 75.82 68.79 62.14 68.38 55.1061.2462.0960.50VLT (TPAMI'22)72.96 75.96 69.60 63.53 68.43 56.9263.4966.2262.80ReLA (CVPR'23)73.82 76.48 70.18 66.04 71.02 57.6565.0065.9762.70MCRES (CVPR'23) 74.92 76.98 70.84 64.32 69.68 56.6463.5164.9061.63MARIS (Ours)76.20 78.92 71.84 66.37 72.10 57.3365.4866.6064.78MARIS (Pr@90)42.38 43.15 40.69 36.58 38.18 30.4229.9432.0031.29scores higher than 90%. According to the last row of Tab. 1,our method typically segments a high-quality mask.", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance comparison among each component of MARIS, including Feature Enhancement (FE), Mutual-Aware Attention (MA), and Mask Decoder (DE).", "figure_data": "Settings Methods Pr@70 Pr@80 Pr@90 IoU# 1w/o FE 71.92 64.51 37.84 69.19# 2w/o MA 77.28 69.03 39.94 74.32# 3w/o DE 78.82 69.69 40.04 74.97# 4Ours80.36 72.11 42.38 76.20", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The comparison between our MA and other methods, including TE in ReSTR, SDF in VLT, BCM in BRINet, and without attention mask.", "figure_data": "SettingsMethodsPr@70 Pr@80 Pr@90 IoU# 5TE (ReSTR) 78.62 70.17 39.77 74.85# 6SDF (VLT)78.91 70.25 40.22 75.08# 7BCM (BRINet) 79.17 70.54 40.56 75.20# 8w/o attn. 
mask 80.08 71.62 41.71 75.52# 4MA (Ours)80.36 72.11 42.38 76.20", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study of the Mask Decoder.", "figure_data": "Settings Methods Pr@70 Pr@80 Pr@90 IoU# 9w/ TOI-A 79.21 71.07 40.65 75.40# 10w/o ELP 78.55 67.96 38.62 74.42# 4Ours80.36 72.11 42.38 76.20Feature Enhancement As shown in Tab. 5, the Fea-ture Enhancement module significantly improves the per-formance of MARIS by 7.01% IoU score. To understandFeature Enhancement comprehensively, we conduct the ex-periment by using another well-known backbone-adaptionbaseline. Specifically, we adopt VIT-DET (Li et al. 2022b)as the compared baseline, which uses only the feature mapfrom the last layer of the backbone to generate multi-scalefeatures. The quantitative evaluations are shown in Tab. 5.", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study of the Feature Enhancement.", "figure_data": "Settings Methods Pr@70 Pr@80 Pr@90 IoU# 1w/o FE71.92 64.51 37.84 69.19# 11VIT-DET 74.74 66.79 37.85 71.55# 4Ours80.36 72.11 42.38 76.20", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Generalization ability of different methods.", "figure_data": "Training SetIoU results on PhraseCut CRIS LAVT OursRefCOCO15.5316.6822.82RefCOCO+16.3016.6421.68G-Ref16.2416.0522.47", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" } ]
Mengxi Zhang; Yiming Liu; Xiangjun Yin; Huanjing Yue; Jingyu Yang
[ { "authors": "N Carion; F Massa; G Synnaeve; N Usunier; A Kirillov; S Zagoruyko", "journal": "Springer", "ref_id": "b0", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "B Cheng; I Misra; A G Schwing; A Kirillov; R Girdhar", "journal": "", "ref_id": "b1", "title": "Masked-attention mask transformer for universal image segmentation", "year": "1290" }, { "authors": "H Ding; C Liu; S Wang; X Jiang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b2", "title": "Vlt: Visionlanguage transformer and query generation for referring segmentation", "year": "2022" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly", "journal": "", "ref_id": "b3", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "X Gu; W Lin; Y Cui", "journal": "", "ref_id": "b4", "title": "Openvocabulary object detection via vision language knowldistillation", "year": "2021" }, { "authors": "K He; G Gkioxari; P Dollár; R Girshick", "journal": "", "ref_id": "b5", "title": "Mask r-cnn", "year": "2017" }, { "authors": "R Hu; M Rohrbach; T Darrell", "journal": "Springer", "ref_id": "b6", "title": "Segmentation from natural language expressions", "year": "2016-10-11" }, { "authors": "Z Hu; G Feng; J Sun; L Zhang; H Lu", "journal": "", "ref_id": "b7", "title": "Bidirectional relationship inferring network for referring image segmentation", "year": "2020" }, { "authors": "Y Jing; T Kong; W Wang; L Wang; L Li; T Tan", "journal": "", "ref_id": "b8", "title": "Locate then segment: A strong pipeline for referring image segmentation", "year": "2021" }, { "authors": "S Kazemzadeh; V Ordonez; M Matten; T Berg", "journal": "", "ref_id": "b9", "title": "Referitgame: Referring to objects in photographs of natural scenes", "year": "2014" }, { "authors": "N Kim; D Kim; C Lan; W Zeng; S Kwak", "journal": "", "ref_id": "b10", "title": "Restr: Convolution-free referring image segmentation using transformers", "year": "2022" }, { "authors": "A Kirillov; E Mintun; N Ravi; H Mao; C Rolland; L Gustafson; T Xiao; S Whitehead; A C Berg; W.-Y Lo", "journal": "", "ref_id": "b11", "title": "Segment anything", "year": "2023" }, { "authors": "J Li; D Li; S Savarese; S Hoi", "journal": "", "ref_id": "b12", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "J Li; R Selvaraju; A Gotmare; S Joty; C Xiong; S C H Hoi", "journal": "Advances in neural information processing systems", "ref_id": "b13", "title": "Align before fuse: Vision and language representation learning with momentum distillation", "year": "2021" }, { "authors": "L H Li; P Zhang; H Zhang; J Yang; C Li; Y Zhong; L Wang; L Yuan; L Zhang; J.-N Hwang", "journal": "", "ref_id": "b14", "title": "Grounded language-image pre-training", "year": "2022" }, { "authors": "W Li; C Gao; G Niu; X Xiao; H Liu; J Liu; H Wu; H Wang", "journal": "", "ref_id": "b15", "title": "Unimo: Towards unified-modal understanding and generation via cross-modal contrastive learning", "year": "2020" }, { "authors": "Y Li; H Mao; R Girshick; K He", "journal": "Springer", "ref_id": "b16", "title": "Exploring plain vision transformer backbones for object detection", "year": "2022" }, { "authors": "T.-Y Lin; P Goyal; R Girshick; K He; P Dollár", "journal": "", "ref_id": "b17", "title": "Focal loss for dense object 
detection", "year": "2017" }, { "authors": "T.-Y Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "Springer", "ref_id": "b18", "title": "Microsoft coco: Common objects in context", "year": "2014-09-06" }, { "authors": "C Liu; H Ding; X Jiang", "journal": "", "ref_id": "b19", "title": "GRES: Generalized referring expression segmentation", "year": "2023" }, { "authors": "C Liu; Z Lin; X Shen; J Yang; X Lu; A Yuille", "journal": "", "ref_id": "b20", "title": "Recurrent multimodal interaction for referring image segmentation", "year": "2017" }, { "authors": "D Liu; H Zhang; F Wu; Z.-J Zha", "journal": "", "ref_id": "b21", "title": "Learning to assemble neural module tree networks for visual grounding", "year": "2019" }, { "authors": "Z Liu; Y Lin; Y Cao; H Hu; Y Wei; Z Zhang; S Lin; B Guo", "journal": "", "ref_id": "b22", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "G Luo; Y Zhou; X Sun; L Cao; C Wu; C Deng; R Ji", "journal": "", "ref_id": "b23", "title": "Multi-task collaborative network for joint referring expression comprehension and segmentation", "year": "2020" }, { "authors": "J Mao; J Huang; A Toshev; O Camburu; A L Yuille; K Murphy", "journal": "", "ref_id": "b24", "title": "Generation and comprehension of unambiguous object descriptions", "year": "2016" }, { "authors": "F Milletari; N Navab; S.-A Ahmadi", "journal": "", "ref_id": "b25", "title": "V-net: Fully convolutional neural networks for volumetric medical image segmentation", "year": "2016" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "", "ref_id": "b26", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b27", "title": "", "year": "" }, { "authors": "H Shi; H Li; F Meng; Q Wu", "journal": "", "ref_id": "b28", "title": "Key-word-aware network for referring expression image segmentation", "year": "2018" }, { "authors": "M Toneva; A Sordoni; R T D Combes; A Trischler; Y Bengio; G J Gordon", "journal": "", "ref_id": "b29", "title": "An empirical study of example forgetting during deep neural network learning", "year": "2018" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b30", "title": "Attention is all you need", "year": "2017" }, { "authors": "X Wang; R Girshick; A Gupta; K He", "journal": "", "ref_id": "b31", "title": "Nonlocal neural networks", "year": "2018" }, { "authors": "Z Wang; Y Lu; Q Li; X Tao; Y Guo; M Gong; T Liu", "journal": "", "ref_id": "b32", "title": "Cris: Clip-driven referring image segmentation", "year": "2022" }, { "authors": "C Wu; Z Lin; S Cohen; T Bui; S Maji", "journal": "", "ref_id": "b33", "title": "Phrasecut: Language-based image segmentation in the wild", "year": "2020" }, { "authors": "J Xu; S De Mello; S Liu; W Byeon; T Breuel; J Kautz; X Wang", "journal": "", "ref_id": "b34", "title": "Groupvit: Semantic segmentation emerges from text supervision", "year": "2022" }, { "authors": "L Xu; M H Huang; X Shang; Z Yuan; Y Sun; J Liu", "journal": "", "ref_id": "b35", "title": "Meta compositional referring expression segmentation", "year": "2023" }, { "authors": "X Yang; Z Li; H Xu; H Zhang; Q Ye; C Li; M Yan; Y Zhang; F Huang; S Huang", "journal": "", "ref_id": "b36", "title": "Learning 
Trajectory-Word Alignments for Video-Language Tasks", "year": "2023" }, { "authors": "Z Yang; J Wang; Y Tang; K Chen; H Zhao; P H Torr", "journal": "", "ref_id": "b37", "title": "Lavt: Language-aware vision transformer for referring image segmentation", "year": "2022" }, { "authors": "L Ye; M Rochan; Z Liu; Y Wang", "journal": "", "ref_id": "b38", "title": "Crossmodal self-attention network for referring image segmentation", "year": "2019" }, { "authors": "L Yu; Z Lin; X Shen; J Yang; X Lu; M Bansal; T L Berg", "journal": "", "ref_id": "b39", "title": "Mattnet: Modular attention network for referring expression comprehension", "year": "2018" }, { "authors": "L Yu; P Poirson; S Yang; A C Berg; T L Berg", "journal": "Springer", "ref_id": "b40", "title": "Modeling context in referring expressions", "year": "2016-10-11" }, { "authors": "L Zhou; H Palangi; L Zhang; H Hu; J Corso; J Gao", "journal": "", "ref_id": "b41", "title": "Unified vision-language pre-training for image captioning and vqa", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 262.89, 64.78, 267.76, 67.03 ], "formula_id": "formula_0", "formula_text": "× 2 1 2 1 2 i j i j 1 2 i j 1 2 i j Mask × 2" }, { "formula_coordinates": [ 3, 200.49, 88.18, 162.81, 103.84 ], "formula_id": "formula_1", "formula_text": "1 2 i j 1 2 i j 1 2 i j 1 2 i j 1 2 i j 1 2 1 2 i j i j 1 2 i j 1 2 i j × 2" }, { "formula_coordinates": [ 4, 54, 157.65, 72.07, 11.23 ], "formula_id": "formula_2", "formula_text": "F v3 ∈ R H×W ×C ." }, { "formula_coordinates": [ 4, 75.66, 212.53, 55.72, 12.87 ], "formula_id": "formula_3", "formula_text": "F l ∈ R Lt×C ′ ." }, { "formula_coordinates": [ 4, 99.18, 411.82, 193.32, 12.17 ], "formula_id": "formula_4", "formula_text": "F = CBA([MLP(F v1 ), MLP(F v2 )]),(1)" }, { "formula_coordinates": [ 4, 98.77, 529.07, 193.74, 12.17 ], "formula_id": "formula_5", "formula_text": "F v = CBA([MLP( F ), MLP(F v3 )]),(2)" }, { "formula_coordinates": [ 4, 54, 560.77, 238.5, 20.61 ], "formula_id": "formula_6", "formula_text": "F v ∈ R Lv×C , where L v is equal to H × W ." }, { "formula_coordinates": [ 4, 336.88, 62.41, 210.6, 222.15 ], "formula_id": "formula_7", "formula_text": "1 2 i j 1 2 i j 1 2 1 2 i j i j 1 2 i j 1 2 i j 1 2 1 2 i j i j 1 2 i j 1 2 i j 1 2 i j 1 2 i j Add & LayerNorm Add & LayerNorm (b) Language-Guided Attention (a) Vision-Guided Attention Attention Mask" }, { "formula_coordinates": [ 4, 381.55, 472.85, 176.45, 26.85 ], "formula_id": "formula_8", "formula_text": "Z v = F v W v , Z l = F l W l , A = Softmax(Z v Z ⊤ l + M)(3)" }, { "formula_coordinates": [ 4, 362.43, 559.77, 195.57, 19.7 ], "formula_id": "formula_9", "formula_text": "M(i, j) = 0 ifM (i, j) < τ -∞ otherwise ,(4)" }, { "formula_coordinates": [ 4, 371.3, 680.66, 134.91, 25.88 ], "formula_id": "formula_10", "formula_text": "F lav =LayerNorm(AZ l + Z v ), F val =LayerNorm(A ⊤ Z v + Z l )." }, { "formula_coordinates": [ 5, 54, 379.64, 69.83, 9.65 ], "formula_id": "formula_11", "formula_text": "F c := [F m , F val ]," }, { "formula_coordinates": [ 5, 98.56, 392.29, 193.94, 12.17 ], "formula_id": "formula_12", "formula_text": "Fc := [ Fm , Fval ] = MHSA(F c ) + F c ,(7)" }, { "formula_coordinates": [ 5, 143.92, 682, 140.83, 9.65 ], "formula_id": "formula_13", "formula_text": "L = L f + L d ,(" } ]
2024-03-12
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b13", "b14", "b28", "b33", "b40", "b41", "b26" ], "table_ref": [], "text": "ChatGPT [24], lanuched in November 2022, marked a seismic shift in the application of AI, sparking a \"wow\" moment that galvanized the tech industry. This innovative product catalyzed a flurry of investment in Generative AI.\nThe innovation journey continued in March 2023 with the introduction of GPT-4 1 , a large multimodal model, capable of processing both text and images, further captivated the industry by demonstrating the extensive capabilities of multimodal technologies. By the end of September 2023, GPT-4 with Vision (GPT-4V) was fully integrated into the Chat-GPT platform. Following this milestone, comprehensive user study reports [14,15,29,34,41,42,44] by computer vision researchers began to emerge, providing evaluations of its visual prowess. More recently, on the first anniversary of ChatGPT, November 6, 2023, OpenAI hosted its first DevDay, during which the GPT-4V API was released. This release opens new doors for the academic community to conduct extensive evaluations of its performance across a range of visual benchmarks, offering quantitative metrics beyond the limited scope of user studies.\nIn this paper, we evaluate GPT-4's performance in visual recognition tasks-one of the fundamental tasks in the field of computer vision-without any fine-tuning (i.e., in a zeroshot manner). We explore two main facets: linguistic and visual capabilities. (i) Regarding linguistic capabilities, we investigate how GPT-4's language proficiency can bolster visual recognition. It's widely recognized that the largescale image-text pre-trained model CLIP [27] has built a bridge between vision and text, enabling zero-shot visual recognition by calculating similarity scores between category name embeddings and image embeddings. Building on CLIP's foundation, we aim to utilize GPT-4's extensive linguistic knowledge to craft more nuanced and detailed descriptions of category names, thus enhancing intra-class diversity and inter-class distinguishability, offering a refined alternative to the use of basic category names for zero-shot recognition. (ii) As for visual capabilities, the evaluation is quite straightforward: we directly feed the image or images (applicable to video and point cloud data) along with candidate categories. By employing prompts, we instruct GPT-4V to organize the candidate categories by relevance to the visual content, thereby obtaining Top-5 prediction.\nTo conduct a comprehensive evaluation, we included three distinct modalities: images, videos, and point clouds, across 16 well-known and publicly available classification benchmarks [1, 4-6, 8-13, 21, 23, 26, 30, 39, 40], as showcased in Figure 1. For video datasets, we implemented a uniform sampling of frames to create multi-image inputs. For point cloud data, we process the 3D shape into multiple 2D rendered images. For each dataset, we offer the zeroshot performance of CLIP, a representative model among web-scale pre-trained vision-language models (VLMs), as a reference. The results includes four backbones: Ope-nAI CLIP (pre-trained on 400M image-text pairs) with ViT-B/32, ViT-B/16, and ViT-L/14, as well as the more recent and larger EVA CLIP's ViT-E/14, which boasts a staggering 4.4B parameters (14× that of ViT-L), and has been pretrained on 2B image-text pairs. 
Our research highlights that GPT-4's linguistic capabilities play a crucial role in boosting zero-shot visual recognition, delivering a notable average increase of 7% in top-1 absolute accuracy across 16 datasets. This leap in performance is driven by GPT-4's rich knowledge, enabling it to craft detailed descriptions for diverse categories. Simultaneously, GPT-4's prowess in visual recognition, evaluated across 16 datasets, is almost on par with EVA-CLIP's ViT-E. This is particularly evident in video datasets such as UCF-101 and HMDB-51, where GPT-4 distinctly surpasses the performance of ViT-E, highlighting its effectiveness in handling contextual visual content reasoning. For more detailed results, analyses, and experimental details, please refer to the experimental section.\nTo the best of our knowledge, this study is the first quantitative evaluation of zero-shot visual recognition capabilities using GPT-4V across three modalities-images, videos, and point clouds-over 16 popular visual benchmarks. We believe that the empirical evidence and prompts provided herein are worth knowing. We hope our data points and experience will contribute meaningfully to future research." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b40", "b14", "b28", "b15", "b17", "b13", "b41", "b33", "b26", "b6", "b18", "b34", "b36", "b37", "b42", "b45", "b21", "b1" ], "table_ref": [], "text": "Preliminary Explorations of GPT-4V. Recent studies have undertaken detailed case studies on GPT-4V's capabilities across diverse tasks. Prior research [41] delved into the reasoning skills of foundational models within visual domains from a qualitative perspective. Subsequently, GPT-4V's performance has been examined in various visuallanguage tasks, including but not limited to video understanding [15], optical character recognition (OCR) [29], image context reasoning [16], recommender system [44], mathematical logic [18], medical imaging analysis [14,42], anomaly detection [3], social media analysis [20] and autonomous driving [34]. However, a gap remains in these studies: most have concentrated on qualitative, initial explorations without extensive quantitative analysis utilizing established visual benchmarks. Such analysis is essential for a comprehensive validation of GPT-4V's visual understanding capabilities. The recent availability of its API2 now enables large-scale quantitative evaluations. Enhancing Zero-shot Visual Recognition with LLMs. The web-scale image-text pre-training model, CLIP [27], has established a pivotal connection between visual and textual domains. Numerous subsequent studies have extended this model to video understanding [7,19,33,[35][36][37][38] or point cloud recognition [43,46]. With the ascent of Large Language Models (LLMs), there is increasing interest in harnessing class-specific knowledge from LLMs can improve CLIP's prediction accuracy. [22] leveraged GPT-3 [2] to create text descriptions for unseen class labels and compare the image embedding with the embeddings of the " }, { "figure_ref": [], "heading": "GPT-4", "publication_ref": [], "table_ref": [], "text": "Rank the {list of categories} from most to least likely to accurately describe the input image / images, and select the top 5 most relevant categories." }, { "figure_ref": [], "heading": "GPT-4V", "publication_ref": [], "table_ref": [], "text": "Text Encoder Image Encoder descriptions. 
[28] further developed this concept by employing ChatGPT to structure the classes hierarchically to boost zero-shot image recognition. In our study, rather than constructing a hierarchical structure, we prompt GPT-4 to produce multi-sentence descriptions for categories, examining the effectiveness of this straightforward approach in image, video, and point cloud recognition." }, { "figure_ref": [], "heading": "••• K sentence embeddings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2", "fig_4" ], "heading": "Methodology", "publication_ref": [ "b26", "b24", "b30" ], "table_ref": [], "text": "As demonstrated in Figure 2, we evaluate GPT-4's linguistic and visual capabilities in zero-shot visual recognition. This section will introduce the specific details. To align with the input requirements of CLIP [27] and GPT-4V [25], apart from image classification tasks, the inputs for video and point cloud classification must be trans-formed into images. As illustrated in Figure 3, this process involves the transformation of video and point cloud data into image sets. Specifically, for input videos, we uniformly sample eight frames to serve as multi-image inputs. Regarding point clouds, we follow MVCNN [31] to render multiple views around the object in a uni-directional manner at an angle of 30 degrees. To reduce testing costs, we use six frontally rendered images in our process. This prepares us to carry out subsequent evaluations." }, { "figure_ref": [], "heading": "Data Processing", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "Exploration of Linguistic Capabilities", "publication_ref": [], "table_ref": [], "text": "Our objective is to explore how the extensive linguistic knowledge of GPT-4 can be leveraged to enhance visual recognition performance. Building on the cross-modal bridge established by CLIP through large-scale image-text pre-training, we aim to enrich textual descriptions beyond using simple category names to better align with visual content. As shown in Figure 2(a), we begin by guiding GPT-4 to generate K sentences describing each category in the dataset using appropriate prompts. These K sentences are then converted into K text embeddings via CLIP's frozen text encoder, while the visual signal is encoded into a vision embedding by CLIP's frozen image encoder (e.g., for video and point cloud data, the vision embedding is obtained by global averaging pooling over multiple frame embeddings or viewpoint embeddings). Subsequently, these text embeddings are compared with the vision embedding to calculate K similarity scores. After normalization with a Softmax function and averaging, we obtain a consolidated similarity score for each category in relation to the visual input. Given a dataset with C categories, each visual input yields C similarity scores, which are then ranked from highest to lowest to determine the final prediction. 1. The statistics of these evaluated datasets." }, { "figure_ref": [ "fig_2" ], "heading": "Evaluation of Visual Capabilities", "publication_ref": [], "table_ref": [], "text": "The recent release of the GPT-4V API allows for a comprehensive evaluation of visual benchmarks, moving beyond limited case studies within the ChatGPT web interface. The evaluation process, outlined in Figure 2(b), is simple and straightforward. Visual samples, whether a single image or a collection, are inputted along with an appropriate text prompt. 
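A minimal sketch of such a request with the OpenAI Python client is shown below. The message format follows the public gpt-4-vision-preview chat API; the prompt wording, helper names, and downstream parsing of the returned ranking are illustrative rather than the exact prompts used in the paper.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def encode_image(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def rank_categories(image_paths, categories):
    """Ask GPT-4V to rank candidate categories for one (multi-)image sample."""
    prompt = (
        "Rank the following categories from most to least likely to describe "
        f"the input image(s), and return the top 5: {', '.join(categories)}"
    )
    content = [{"type": "text", "text": prompt}]
    for p in image_paths:
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{encode_image(p)}"},
        })
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{"role": "user", "content": content}],
        max_tokens=200,
    )
    return resp.choices[0].message.content  # parsed downstream into top-5 labels
```

For video samples, `image_paths` would hold the 8 uniformly sampled frames, and for point clouds the 6 rendered views, matching the data processing described above; the returned text still has to be parsed into a top-5 category list and compared against the ground truth.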
This prompt guides GPT-4V to assess the dataset's categories, sorting them based on their relevance to the provided visual content, and produces the top-5 prediction results. These predictions are subsequently compared against the ground truth of the dataset to derive both top-1 and top-5 accuracy metrics, providing a quantitative assessment of GPT-4V's visual understanding capabilities. Further details about our prompts is available in Code Repo." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [], "table_ref": [], "text": "This study evaluates 16 visual datasets across images, videos, and point clouds. The evaluation employs the widely recognized validation sets for these benchmarks, with Table 1 providing detailed statistics for each dataset." }, { "figure_ref": [ "fig_5", "fig_6" ], "heading": "Implementation Details", "publication_ref": [ "b25" ], "table_ref": [], "text": "GPT-4 Generated Descriptions. Using the GPT-4 API (version gpt-4-1106-preview), we generate K descriptive sentences for each category, with K defaulting to be 20. As an example, for the \"British Shorthairs\" category from the Oxford Pets dataset [26], we present our prompts alongside GPT-4's responses in Figure 4.\nUtilizing GPT-4 with Vision. In our study, we employ the GPT-4V API (specifically, gpt-4-vision-preview) to evalu-I want you to act as a pet expert. I will give you a word and you task is to give me 20 sentences to describe the word. Your description must accurately revolve around this word and be as objective, detailed and diverse as possible. In addition, the subject of your description is a pet photograph. The word is \"British Shorthairs\".\n1. British Shorthairs have round, expressive eyes that are typically copper or gold, adding to their endearing look. 2. This breed's coat comes in a variety of colors and patterns, with the blue-gray variety being one of the most popular and iconic. 3. The British Shorthair, with its dense and plush coat, has a distinctively chunky body and a broad face that give it a teddy bear-like appearance. …… [\"brush_hair\", \"sit\", \"stand\", \"talk\", \"smile\"] [\"chair\", \"sofa\", \"desk\", \"table\", \"night_stand\"] ate 16 different benchmarks. For videos, we select 8 frames via uniform sampling for API processing, and for point clouds, we provide images from six perspectives. Figure 5 showcases the interaction with GPT-4V, highlighting both the prompts used and the subsequent responses for evaluations across images, videos, and point clouds." }, { "figure_ref": [ "fig_8", "fig_10" ], "heading": "Gains from GPT Prompts", "publication_ref": [ "b26", "b12", "b9", "b25", "b10", "b0", "b11", "b29", "b8" ], "table_ref": [], "text": "Table 2 showcases our evaluation results on 16 datasets and their average performance. For each dataset, we've detailed results using four different CLIP backbones, including OpenAI CLIP [27]'s configurations of ViT-B/32, ViT-B/16, and ViT-L/14, each pre-trained with 400 million image-text pairs, and the EVA CLIP [32]'s ViT-E/14, which is notable for its 4.4 billion parameters (14× that of ViT-L/14) and training on 2 billion image-text pairs. We will delve into an analysis of these results next.\nDescriptions generated by GPT-4 distinctly surpass the CLIP baseline in a majority of datasets, boasting an average top-1 accuracy improvement of 7% across 16 datasets. 
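To make the scoring behind these gains concrete: given the K GPT-generated sentences per category and frozen CLIP encoders, the consolidated per-category score can be computed as in the sketch below. Here `tokenizer` is assumed to behave like clip.tokenize, the softmax is taken over categories before averaging across the K descriptions (our reading of the procedure described in the methodology), and the temperature of 100 mimics CLIP's learned logit scale; these choices are assumptions, not confirmed implementation details.

```python
import torch

@torch.no_grad()
def classify_with_descriptions(model, image, descriptions, tokenizer):
    """
    descriptions: list of C lists, each holding the K GPT-generated sentences
    for one category. `model` is a frozen CLIP model. Returns category indices
    ranked by the consolidated similarity score.
    """
    C, K = len(descriptions), len(descriptions[0])
    flat = [s for per_class in descriptions for s in per_class]
    txt = model.encode_text(tokenizer(flat).to(image.device))      # (C*K, D)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    img = model.encode_image(image)                                # (1, D)
    img = img / img.norm(dim=-1, keepdim=True)
    sims = (img @ txt.t()).view(C, K)                              # (C, K)
    probs = torch.softmax(100.0 * sims, dim=0)                     # normalize over categories
    scores = probs.mean(dim=1)                                     # consolidated per-category score
    return scores.argsort(descending=True)                         # ranked prediction
```

For video and point cloud inputs, `img` would instead be the global average of the per-frame or per-view embeddings, as described in the methodology.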
This consistent enhancement across all three modalities-images, videos, and point clouds-highlights the method's potent generalizability. More specifically:\n1) For image datasets, with RAF-DB [13] as a focal point, GPT Prompts enable an over 20% increment in accuracy across various backbones. For other datasets like Eu-roSAT [10] satellite image classification, Flower [23] finegrained recognition, Pets [26] fine-grained recognition, Aircraft [21] fine-grained classification, and Caltech101 [8] object classification, we observe boosts of approximately 9-15%. Smaller gains in Stanford Cars [11] and Food101 [1] suggest that a high density of similar categories may lead to ambiguous descriptions, confusing the CLIP model. In general, larger CLIP models achieve better zero-shot recognition performance on image tasks, and GPT-generated prompts reliably offer additional enhancements.\n2) On video datasets, especially HMDB-51 [12] and UCF101 [30], we observe astonishing gains of up to 11-15%, indicating that rich descriptions of human actions align better with video content than simpler phrases. The Something-Something V1 (SSV1) [9] dataset, however, exhibits poor performance with the CLIP baseline (< 4% Top-1) due to the lack of temporal modeling. Unlike Kinetics, UCF, and HMDB datasets, which can be recognized through scenes and object appearances as shown in Figure 6, SSV1 demands the understanding of complex objectobject and human-object interactions, requiring robust temporal and motion modeling for correct recognition. Hence, activities cannot be inferred merely from individual frames (e.g., Pushing something so it spins), as demonstrated in Figure 7. In essence, with scene-based video recognition datasets, the larger the CLIP model, the greater the zeroshot performance, a trend consistent with image tasks where GPT Prompts lead to additional gains. Yet, in datasets where temporal modeling is crucial, CLIP's simple frame averaging strategy falls short, and GPT prompts cannot compensate for this deficiency.\n3) For point cloud datasets, employing multiple rendered viewpoints for zero-shot recognition with CLIP achieves noteworthy accuracy, mirroring the positive effects seen with image and scene-based video datasets. The integration of GPT Prompts further amplifies these positive results." }, { "figure_ref": [ "fig_6", "fig_8", "fig_10" ], "heading": "Zero-shot Visual Performance of GPT-4V", "publication_ref": [ "b12", "b4", "b9", "b25", "b0", "b5", "b10", "b39", "b8", "b29", "b11" ], "table_ref": [], "text": "To evaluate the visual capabilities of GPT-4V, as shown in Table 2, we conduct quantitative evaluation across 16 datasets. Utilizing straightforward prompts (depicted in Figure 5), we obtain predictions from GPT-4V. Analyzing the average results from these 16 datasets, GPT-4V's top-1 accuracy approaches that of EVA's ViT-E. Specifically:\n1) On image datasets, GPT-4V significantly outstrips the largest CLIP model EVA ViT-E on the RAF-DB dataset [13] (68.7% vs. 31.0%), demonstrating a strong capability in facial expression recognition. Additionally, it outperforms EVA ViT-E in the fine-grained task of aircraft recognition [21] (56.6% vs. 50.6%) and achieves comparable results in Caltech101 object recognition [8]. GPT-4V's ability to classify textures [5], satellite images [10], and recog- Table 2. Main results in zero-shot visual recognition across the 16 datasets, reporting Top-1 and Top-5 accuracy (%). We also report the parameter count of CLIP's image backbone for reference. 
\"Baseline\" denotes the direct use of category names. \"GPT Prompts\" refers to the utilization of multi-sentence descriptions generated by GPT-4 Turbo API for category names. \"GPT-4V\" indicates the use of the GPT-4 Turbo with vision API for visual content recognition.\nnize pets [26] situates it between the performance levels of CLIP ViT-L and EVA ViT-E. However, it slightly lags behind ViT-L in the more specialized areas of flower [23] and food [1] recognition. In the broad-spectrum challenge of ImageNet [6] 1k-class recognition and car type identification [11], GPT-4V's accuracy falls between that of ViT-B/32 and ViT-B/16. In scenarios such as scene recognition [40],\nGPT-4V's efficacy is close to ViT-B/32, illustrating its competitive yet varying performance across a spectrum of visual tasks. It's noteworthy that, as per the GPT-4V documentation3 , the low-resolution version of the model scales images to 512×512, while the high-resolution version scales \"Hand-crafted Prompt\" denotes a fixed template, such as \"A photo of a {category name}.\" for image datasets, \"A video of a person {category name}.\" for video datasets, and \"A point cloud depth map of a {category name}.\" for point cloud datasets. \"GPT Prompts\" refers to descriptive sentences generated by GPT-4. \"Hand-crafted Prompt + GPT Prompts\" refers to a concatenation of a template with each descriptive sentence generated by GPT-4, such as \"A photo of a {Category}. {GPT-generated sentence}\".\nto 2048×2048. As Table 1 illustrates, many of the datasets feature relatively lower resolutions, with the majority significantly below 512×512. This discrepancy may impact GPT-4V's recognition accuracy, as seen in the case of the EuroSAT dataset, which has a resolution of 64×64.\n2) For video datasets, it's important to highlight that Something-Something V1 [9] focuses on modeling temporal relationships, whereas UCF101 [30], HMDB51 [12], and Kinetics [4] are less dependent on such temporal relationships, meaning actions can often be inferred from individual frames, as shown in Figure 6. GPT-4V performs well on Kinetics, UCF101, and HMDB51, significantly surpassing EVA ViT-E's performance on UCF101 and HMDB51: achieving 83.7% vs. 74.8% on UCF, and an even more significant 63.2% vs. 41.5% on HMDB. This superior performance may be due to GPT-4V's adeptness at drawing inferences from the context of adjacent frames. Notably, GPT-4V's performance on the SSV1 dataset is also markedly poor, at just 4.6% top-1 accuracy, which aligns with the CLIP baseline. This is exemplified in Figure 7, where isolating each frame does not provide enough context to ascertain the person's activity; only through the analysis of motion information across a sequence of frames can we make a prediction. Such results highlight the limitations of GPT-4V in temporal modeling due to the absence of a video encoder capable of processing temporal dynamics and motions.\n3) For point cloud datasets, GPT-4V demonstrates excellent performance with just six rendered images, on par with EVA ViT-E. It stands to reason that adding more views would likely enhance the recognition accuracy even further.\nIn this section, all the above results are intended to provide baseline data and experience for future research, encouraging the development of more effective prompts to guide GPT-4V towards more accurate recognition." 
}, { "figure_ref": [ "fig_11" ], "heading": "Ablation Studies on GPT Prompts", "publication_ref": [ "b39", "b5", "b10", "b9", "b12", "b9" ], "table_ref": [ "tab_4" ], "text": "Here we present several ablation studies demonstrating the impact of prompts on CLIP's zero-shot performance. Impact of different prompts. datasets. Augmenting this with Hand-crafted Prompt combined with the category names leads to further improvements in most datasets, showcasing the method's robustness. We then explore the effectiveness of employing multiple GPT-generated descriptive sentences related to category names. We find that GPT Prompt outperforms the baseline in 14 datasets. Figure 8 showcases the performance enhancement of GPT Prompts over category names in certain categories. Also, GPT Prompt can achieve better performance than Hand-crafted Prompts in 10 datasets. Our conjecture is that single category names may convey more global concepts, while the fine-grained details in generated descriptions are likely to align more closely with the visual content, thus amplifying inter-class distinctiveness. The strategy of generating multiple descriptive sentences may potentially further augment this effect.\nHowever, it's noteworthy that GPT Prompts are either below or roughly on par with Hand-crafted Prompt in 6 datasets, particularly in SUN397 [40], ImageNet-1k [6], Oxford Cars [11], and Kinetics-400 [4]. These datasets generally have a large number of categories with an emphasis on highly fine-grained classification. For such closely similar categories (like similar cars or scenes), richer descriptions generated may not be as distinctive as simply us- ing the category name. Therefore, we consider combining \"Hand-crafted Prompt + GPT Prompts\" to amalgamate the advantages of both, which has led to improved results in 11 datasets. For the 4 datasets (i.e., EuroSAT [10], RAF-DB [13], Flower102 [23] and ModelNet10 [39]) where GPT Prompts demonstrate a clear advantage, the integration of Hand-crafted Prompt has been deemed unnecessary.\nImpact of sentence quantity generated by GPT. Our exploration also delved into the effect of the number of descriptive sentences generated by GPT-4 on zero-shot performance. Taking the EuroSAT [10] dataset as an example, as shown in Table 4, performance with only one generated sentence was lower than using the category name alone. However, an increase to three sentences led to a noticeable improvement and surpassed the baseline (42.9% vs. 40.2%). With five sentences, there was a substantial performance boost. In pursuit of identifying a saturation point for this improvement, we observed that increasing to 20 sentences brought about minimal additional benefits. Consequently, we adopt the generation of 20 sentences as the default setting for our experiments." }, { "figure_ref": [ "fig_13" ], "heading": "Special Cases and Discussion on GPT-4V", "publication_ref": [ "b4" ], "table_ref": [ "tab_5" ], "text": "In this section, we primarily present some special phenomena observed during the evaluation, provided for the reference of future researchers. Batch Testing vs. Single Testing. Our initial results were released before December 2023, during a period when we were constrained by OpenAI's daily API request limit of 100 per account. This led us to implement Batch Testing, where one request yielded results for multiple samples. 
Moving closer to March 2024, the limits were removed for tier 5 accounts, enabling us to switch back to standard Sin- gle Testing, updating all datasets with one result per request. Table 5 shows a comparison of these two methods. The results from batch testing do not align perfectly with those from standard single testing. This misalignment could stem from several factors: 1) There's a notable probability of result misalignment, i.e., the results intended for sample A may actually correspond to sample B. Furthermore, we've observed instances of both repeated predictions for the same sample and missing predictions for others. 2) Given that GPT-4V can process all samples in a batch simultaneously, the content of these samples might interfere with the results of individual samples. Therefore, we recommend that future work should invariably employ single testing (i.e., the API processes only one sample at a time) to ensure the accuracy of the tests.\nPredict categories beyond the given list. In some instances, GPT-4V predicted categories that were not included in the given category list. For example, during the evaluation of the texture recognition dataset DTD [5], GPT-4V might respond with: \"Note: As there were no categories provided that perfectly matched some images (such as straw-like), I have used the most comparable or related terms from the list provided. Additionally, not all images may have an exact or obvious matching category within the provided list, and I've estimated the most relevant categories based on observable texture patterns in the images.\" In such cases, we tried to restrict the predictions to the provided category list through the prompts, but this proved ineffective. To proceed with the evaluation, we chose to exclude these predictions that were not within the given list.\nSafety system in GPT-4V. Throughout our dataset evalua-tions, we stumbled upon specific instances, as depicted in Figure 9, where GPT-4V refused to generate predictions, stating: \"Your input image may contain content that is not allowed by our safety system.\" We surmise that this precautionary mechanism is designed to ensure that GPT-4V adheres to ethical guidelines by avoiding engagement with potentially sensitive or inappropriate content.\nGPT-4V API Costs. We offer an estimate that using the GPT-4V API for one testing round across all datasets costs about $4000 for reader's reference." }, { "figure_ref": [], "heading": "Conclusion and Limitation", "publication_ref": [ "b16", "b44" ], "table_ref": [], "text": "This work aims to quantitatively evaluating the linguistic and visual capabilities of the current state-of-the-art large multimodal model GPT-4 in zero-shot visual recognition tasks. To ensure a comprehensive evaluation, we have conducted experiments across three modalities-images, videos, and point clouds-spanning a total of 16 benchmarks. We hope our empirical study and experience will benefit the community, fostering the evolution of future multimodal models. Limitations: 1) This study has focused solely on fundamental visual recognition tasks. A comprehensive quantitative analysis of other tasks, such as object detection, is necessary to truly gauge the breadth of these models' capabilities in analyzing complex visual information. 2) This work is limited to the evaluation of GPT-4 alone. Future efforts could include quantitative comparisons of various multimodal models (e.g., LLaVA [17], MiniGPT-4 [45], etc.), enhancing the breadth and depth of our analysis. 
3) The current method of prompting GPT-4V is quite straightforward, raising concerns that an overly lengthy category list will increase the number of input tokens, potentially leading to adverse effects. Designing more effective prompts may further unlock GPT-4V's capabilities." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "We release our code at https://github.com/whwu95/GPT4Vis." } ]
This paper does not present a novel method. Instead, it delves into an essential, yet must-know baseline in light of the latest advancements in Generative Artificial Intelligence (GenAI): the utilization of GPT-4 for visual understanding. Our study centers on the evaluation of GPT-4's linguistic and visual capabilities in zero-shot visual recognition tasks: Firstly, we explore the potential of its generated rich textual descriptions across various categories to enhance recognition performance without any training. Secondly, we evaluate GPT-4's visual proficiency in directly recognizing diverse visual content. We conducted extensive experiments to systematically evaluate GPT-4's performance across images, videos, and point clouds, using 16 benchmark datasets to measure top-1 and top-5 accuracy. Our findings show that GPT-4, enhanced with rich linguistic descriptions, significantly improves zero-shot recognition, offering an average top-1 accuracy increase of 7% across all datasets. GPT-4 excels in visual recognition, outshining OpenAI-CLIP's ViT-L and rivaling EVA-CLIP's ViT-E, particularly in video datasets HMDB-51 and UCF-101, where it leads by 22% and 9%, respectively. We hope this research contributes valuable data points and experience for future studies.
GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition?
[ { "figure_caption": "Figure 1 .1Figure 1. An overview of 16 evaluated popular benchmark datasets, comprising images, videos, and point clouds. The image benchmarks include tasks such as texture recognition, satellite image classification, scene recognition, facial expression recognition, as well as finegrained object classification. The video datasets encompass diverse human actions captured from various viewpoints and scenes. The point cloud datasets provide valuable information that can be projected onto multi-view depth maps for visual recognition.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "CLIPList K sentences to describe the {category}.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Zero-shot visual recognition leveraging GPT-4's linguistic and visual capabilities. (a) We built upon the visual-language bridge established by CLIP [27] and employed the rich linguistic knowledge of GPT-4 to generate additional descriptions for categories, exploring the benefits to visual recognition. (b) We present visual content (i.e., single or multiple images) along with a category list, and prompt GPT-4V to generate the top-5 prediction results.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Uniform sampling of videos into multiple images. (b) Projection of point clouds into multiple images from six views.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Processing video and point cloud data into images.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Sentences generated by GPT-4 for \"British Shorthairs\".", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Prompts for image, video, and point cloud datasets: (a) An example from RAF-DB [13] illustrates 7-class facial expression recognition. (b) A video example from HMDB-51 [12] demonstrates 51-class action recognition, where ellipses indicate category names omitted due to space constraints. (c) An example from ModelNet10 [39] for point cloud classification across 10 categories, where ellipses again indicate the truncation of category names owing to space constraints. Please zoom in for best view.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "(a) Ground Truth: Biking through snow. (b) Ground Truth: Grinding meat.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Two video examples from the Kinetics dataset [4] accurately predicted by GPT-4V.", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "(a) Ground Truth: Pushing something so it spins ( ). GPT-4V Prediction: Pretending to pick something up ( ). (b) Ground Truth: Putting something into something ( ). GPT-4V Prediction: Opening something ( ).", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. 
Two video examples from the Something-Something dataset [9] incorrectly predicted by GPT-4V.", "figure_data": "", "figure_id": "fig_10", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. GPT Prompt vs. Category Name in some classes.", "figure_data": "", "figure_id": "fig_11", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "(a) \"bikini\" image (b) \"massaging legs\" video (c) \"arm wrestling\" video", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Some examples rejected by GPT-4V. (a) is an image from ImageNet-1K, (b) and (c) are videos from Kinetics-400.", "figure_data": "", "figure_id": "fig_13", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "DTD [5]47300×3001,692EuroSAT [10]1064×648,100SUN397 [40]397969×776 19,850RAF-DB [13]7100×1003,068Caltech101 [8]101300×2002,465ImageImageNet [6]1000469×387 50,000FGVC-Aircraft [21]1001098×747 3,333Flower102 [23]102667×5002,463Stanford Cars [11]196360×2408,041Food101 [1]101500×350 30,300Oxford Pets [26]37500×3503,669UCF-101 [30] (Split 1)101320×2403,783VideoHMDB-51 [12] (Split 1) Kinetics-400 [4]51 400340×256 420×320 19,796 1,530Sth-Sth V1 [9]174176×100 11,522Point Cloud ModelNet10 [39]10224×224908", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Hand-crafted Prompt + GPT Prompts 46.9 48.0 63.3 34.5 92.8 63.7 21.9 70.1 61.2 80.8 89.3 52.7 69.9 47.2 42.0 Evaluating the impact of different prompts on CLIP-based zero-shot visual recognition in image, video, and point cloud datasets.", "figure_data": "Prompt for CLIP ViT-B/32DTD SAT SUN RAF Caltech ImageNet Aircraft Flower Cars Food Pets K400 UCF HMDB MNet10Baseline: Category name42.1 40.2 59.2 22.4 86.859.016.661.6 58.9 78.0 79.9 47.9 59.9 38.442.9Hand-crafted Prompt43.7 45.3 62.0 24.2 91.162.019.567.0 60.4 80.5 87.4 49.8 61.5 43.040.1GPT Prompts44.6 49.4 57.7 45.8 90.859.621.571.8 53.0 80.0 87.4 47.9 69.4 44.547.8", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Table 3 comprehensively exhibits the results of different prompts on the zero-shot visual recognition performance of CLIP across various", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The impact of different numbers of sentences generated by GPT on EuroSAT dataset. Backbone: CLIP ViT-B/32.", "figure_data": "Method# Sentences Top-1(%)Baseline: Category name140.2138.4342.9GPT Prompts5 1047.4 49.12049.43049.9DTD SUN RAF Caltech ImgNet Aircraft Flower Cars Food PetsBatch 59.1 57.0 58.5 95.558.536.0 70.6 58.3 79.0 92.6Single 57.7 59.2 68.7 93.763.156.6 69.1 62.7 86.2 90.8", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Impact of Single vs. Batch Testing on GPT-4V performance evaluation, reporting top-1 accuracy (%).", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" } ]
Wenhao Wu; Huanjin Yao; Mengxi Zhang; Yuxin Song; Wanli Ouyang; Jingdong Wang
[ { "authors": "Lukas Bossard; Matthieu Guillaumin; Luc Van Gool", "journal": "", "ref_id": "b0", "title": "Food-101-mining discriminative components with random forests", "year": "2014" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Yunkang Cao; Xiaohao Xu; Chen Sun; Xiaonan Huang; Weiming Shen", "journal": "", "ref_id": "b2", "title": "Towards generic anomaly detection and understanding: Large-scale visual-linguistic model (gpt-4v) takes the lead", "year": "2023" }, { "authors": "Joao Carreira; Andrew Zisserman", "journal": "", "ref_id": "b3", "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "year": "2017" }, { "authors": "Mircea Cimpoi; Subhransu Maji; Iasonas Kokkinos; Sammy Mohamed; Andrea Vedaldi", "journal": "", "ref_id": "b4", "title": "Describing textures in the wild", "year": "2014" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "", "ref_id": "b5", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Bo Fang; Wenhao Wu; Chang Liu; Yu Zhou; Yuxin Song; Weiping Wang; Xiangbo Shu; Xiangyang Ji; Jingdong Wang", "journal": "", "ref_id": "b6", "title": "Uatvr: Uncertainty-adaptive text-video retrieval", "year": "2023" }, { "authors": "Li Fei-Fei; Rob Fergus; Pietro Perona", "journal": "", "ref_id": "b7", "title": "Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories", "year": "2004" }, { "authors": "Raghav Goyal; Samira Ebrahimi Kahou; Vincent Michalski; Joanna Materzynska; Susanne Westphal; Heuna Kim; Valentin Haenel; Ingo Fruend; Peter Yianilos; Moritz Mueller-Freitag", "journal": "", "ref_id": "b8", "title": "The\" something something\" video database for learning and evaluating visual common sense", "year": "2017" }, { "authors": "Patrick Helber; Benjamin Bischke; Andreas Dengel; Damian Borth", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b9", "title": "Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification", "year": "2019" }, { "authors": "Jonathan Krause; Michael Stark; Jia Deng; Li Fei-Fei", "journal": "", "ref_id": "b10", "title": "3d object representations for fine-grained categorization", "year": "2013" }, { "authors": "Hildegard Kuehne; Hueihan Jhuang; Estíbaliz Garrote; Tomaso Poggio; Thomas Serre", "journal": "", "ref_id": "b11", "title": "Hmdb: a large video database for human motion recognition", "year": "2011" }, { "authors": "Shan Li; Weihong Deng; Junping Du", "journal": "IEEE", "ref_id": "b12", "title": "Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild", "year": "2017" }, { "authors": "Yingshu Li; Yunyi Liu; Zhanyu Wang; Xinyu Liang; Lingqiao Liu; Lei Wang; Leyang Cui; Zhaopeng Tu; Longyue Wang; Luping Zhou", "journal": "medRxiv", "ref_id": "b13", "title": "A comprehensive study of gpt-4v's multimodal capabilities in medical imaging", "year": "2023" }, { "authors": "Kevin Lin; Faisal Ahmed; Linjie Li; Chung-Ching Lin; Ehsan Azarnasab; Zhengyuan Yang; Jianfeng Wang; Lin Liang; Zicheng Liu; Yumao Lu", "journal": "", "ref_id": "b14", "title": 
"Mm-vid: Advancing video understanding with gpt-4v (ision)", "year": "2023" }, { "authors": "Fuxiao Liu; Tianrui Guan; Zongxia Li; Lichang Chen; Yaser Yacoob; Dinesh Manocha; Tianyi Zhou", "journal": "", "ref_id": "b15", "title": "Hallusionbench: You see what you think? or you think what you see? an image-context reasoning benchmark challenging for gpt-4v (ision), llava-1.5, and other multi-modality models", "year": "2023" }, { "authors": "Haotian Liu; Chunyuan Li; Qingyang Wu; Yong Jae Lee", "journal": "", "ref_id": "b16", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "Pan Lu; Hritik Bansal; Tony Xia; Jiacheng Liu; Chunyuan Li; Hannaneh Hajishirzi; Hao Cheng; Kai-Wei Chang; Michel Galley; Jianfeng Gao", "journal": "", "ref_id": "b17", "title": "Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts", "year": "2023" }, { "authors": "Huaishao Luo; Lei Ji; Ming Zhong; Yang Chen; Wen Lei; Nan Duan; Tianrui Li", "journal": "Neurocomputing", "ref_id": "b18", "title": "Clip4clip: An empirical study of clip for end to end video clip retrieval and captioning", "year": "2022" }, { "authors": "Hanjia Lyu; Jinfa Huang; Daoan Zhang; Yongsheng Yu; Xinyi Mou; Jinsheng Pan; Zhengyuan Yang; Zhongyu Wei; Jiebo Luo", "journal": "", "ref_id": "b19", "title": "Gpt-4v (ision) as a social media analysis engine", "year": "2023" }, { "authors": "Subhransu Maji; Esa Rahtu; Juho Kannala; Matthew Blaschko; Andrea Vedaldi", "journal": "", "ref_id": "b20", "title": "Fine-grained visual classification of aircraft", "year": "2013" }, { "authors": "Sachit Menon; Carl Vondrick", "journal": "ICLR", "ref_id": "b21", "title": "Visual classification via description from large language models", "year": "2022" }, { "authors": "Maria-Elena Nilsback; Andrew Zisserman", "journal": "ICVGIP", "ref_id": "b22", "title": "Automated flower classification over a large number of classes", "year": "2008" }, { "authors": " Openai", "journal": "", "ref_id": "b23", "title": "Chatgpt", "year": "2023" }, { "authors": " Openai", "journal": "", "ref_id": "b24", "title": "Gpt-4v(ision) system card", "year": "2023" }, { "authors": "Andrea Omkar M Parkhi; Andrew Vedaldi; Zisserman; Jawahar", "journal": "", "ref_id": "b25", "title": "Cats and dogs", "year": "2012" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b26", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Zhiyuan Ren; Yiyang Su; Xiaoming Liu", "journal": "", "ref_id": "b27", "title": "Chatgptpowered hierarchical comparisons for image classification", "year": "2023" }, { "authors": "Yongxin Shi; Dezhi Peng; Wenhui Liao; Zening Lin; Xinhong Chen; Chongyu Liu; Yuyi Zhang; Lianwen Jin", "journal": "", "ref_id": "b28", "title": "Exploring ocr capabilities of gpt-4v (ision): A quantitative and in-depth evaluation", "year": "2023" }, { "authors": "Khurram Soomro; Mubarak Amir Roshan Zamir; Shah", "journal": "", "ref_id": "b29", "title": "Ucf101: A dataset of 101 human actions classes from videos in the wild", "year": "2012" }, { "authors": "Hang Su; Subhransu Maji; Evangelos Kalogerakis; Erik Learned-Miller", "journal": "", "ref_id": "b30", "title": "Multi-view convolutional neural networks for 3d shape recognition", "year": "2015" }, { "authors": "Quan Sun; Yuxin Fang; Ledell Wu; Xinlong Wang; Yue Cao", "journal": "", "ref_id": "b31", 
"title": "Eva-clip: Improved training techniques for clip at scale", "year": "2023" }, { "authors": "Mengmeng Wang; Jiazheng Xing; Yong Liu", "journal": "", "ref_id": "b32", "title": "Actionclip: A new paradigm for video action recognition", "year": "2021" }, { "authors": "Licheng Wen; Xuemeng Yang; Daocheng Fu; Xiaofeng Wang; Pinlong Cai; Xin Li; Tao Ma; Yingxuan Li; Linran Xu; Dengke Shang", "journal": "", "ref_id": "b33", "title": "On the road with gpt-4v (ision): Early explorations of visual-language model on autonomous driving", "year": "2023" }, { "authors": "Wenhao Wu; Haipeng Luo; Bo Fang; Jingdong Wang; Wanli Ouyang", "journal": "", "ref_id": "b34", "title": "Cap4video: What can auxiliary captions do for text-video retrieval?", "year": "2023" }, { "authors": "Wenhao Wu; Zhun Sun; Wanli Ouyang", "journal": "", "ref_id": "b35", "title": "Revisiting classifier: Transferring vision-language models for video recognition", "year": "2023" }, { "authors": "Wenhao Wu; Zhun Sun; Yuxin Song; Jingdong Wang; Wanli Ouyang", "journal": "International Journal of Computer Vision", "ref_id": "b36", "title": "Transferring vision-language models for visual recognition: A classifier perspective", "year": "2023" }, { "authors": "Wenhao Wu; Xiaohan Wang; Haipeng Luo; Jingdong Wang; Yi Yang; Wanli Ouyang", "journal": "", "ref_id": "b37", "title": "Bidirectional cross-modal knowledge exploration for video recognition with pretrained vision-language models", "year": "2023" }, { "authors": "Zhirong Wu; Shuran Song; Aditya Khosla; Fisher Yu; Linguang Zhang; Xiaoou Tang; Jianxiong Xiao", "journal": "", "ref_id": "b38", "title": "3d shapenets: A deep representation for volumetric shapes", "year": "2015" }, { "authors": "Jianxiong Xiao; James Hays; Krista A Ehinger; Aude Oliva; Antonio Torralba", "journal": "", "ref_id": "b39", "title": "Sun database: Large-scale scene recognition from abbey to zoo", "year": "2010" }, { "authors": "Zhengyuan Yang; Linjie Li; Kevin Lin; Jianfeng Wang; Chung-Ching Lin; Zicheng Liu; Lijuan Wang", "journal": "", "ref_id": "b40", "title": "The dawn of lmms: Preliminary explorations with gpt-4v (ision)", "year": "2023" }, { "authors": "Zhichao Yang; Zonghai Yao; Mahbuba Tasmin; Parth Vashisht; Seok Won; Beining Jang; Dan Wang; Hong Berlowitz; Yu", "journal": "medRxiv", "ref_id": "b41", "title": "Performance of multimodal gpt-4v on usmle with image: Potential for imaging diagnostic support with explanations", "year": "2023" }, { "authors": "Renrui Zhang; Ziyu Guo; Wei Zhang; Kunchang Li; Xupeng Miao; Bin Cui; Yu Qiao; Peng Gao; Hongsheng Li", "journal": "", "ref_id": "b42", "title": "Pointclip: Point cloud understanding by clip", "year": "2021" }, { "authors": "Peilin Zhou; Meng Cao; You-Liang Huang; Qichen Ye; Peiyan Zhang; Junling Liu; Yueqi Xie; Yining Hua; Jaeboum Kim", "journal": "", "ref_id": "b43", "title": "Exploring recommendation capabilities of gpt-4v (ision): A preliminary case study", "year": "2023" }, { "authors": "Deyao Zhu; Jun Chen; Xiaoqian Shen; Xiang Li; Mohamed Elhoseiny", "journal": "", "ref_id": "b44", "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models", "year": "2023" }, { "authors": "Xiangyang Zhu; Renrui Zhang; Bowei He; Ziyu Guo; Ziyao Zeng; Zipeng Qin; Shanghang Zhang; Peng Gao", "journal": "", "ref_id": "b45", "title": "Pointclip v2: Prompting clip and gpt for powerful 3d open-world learning", "year": "2022" } ]
[]
2023-11-27
[ { "figure_ref": [], "heading": "Abstract", "publication_ref": [], "table_ref": [], "text": "It is well known that many open-released foundational diffusion models have difficulty in generating images that substantially depart from average brightness, despite such images being present in the training data. This is due to an inconsistency: while denoising starts from pure Gaussian noise during inference, the training noise schedule retains residual data even in the final timestep distribution, due to difficulties in numerical conditioning in mainstream formulation, leading to unintended bias during inference. To mitigate this issue, certain ϵ-prediction models are combined with an ad-hoc offset-noise methodology. In parallel, some contemporary models have adopted zeroterminal SNR noise schedules together with v-prediction, which necessitate major alterations to pre-trained models. However, such changes risk destabilizing a large multitude of community-driven applications anchored on these pretrained models. In light of this, our investigation revisits the fundamental causes, leading to our proposal of an innovative and principled remedy, called One More Step (OMS). By integrating a compact network and incorporating an additional simple yet effective step during inference, OMS elevates image fidelity and harmonizes the dichotomy between training and inference, while preserving original model parameters. Once trained, various pre-trained diffusion models with the same latent domain can share the same OMS module. Codes and models are released at here." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b8", "b34", "b13", "b4", "b17", "b8", "b2", "b9", "b26", "b17", "b8", "b36", "b21" ], "table_ref": [], "text": "Diffusion models have emerged as a foundational method for improving quality, diversity, and resolution of generated images [9,35], due to the robust generalizability and straightforward training process. At present, a series of open-source diffusion models, exemplified by Stable Diffusion [30], hold significant sway and are frequently cited within the community. Leveraging these open-source models, numerous researchers and artists have either directly adapted [14,39] or employed other techniques [11] to finetune and craft an array of personalized models.\nHowever, recent findings by Karras et al. [15], Lin et al. [18] identified deficiencies in existing noise schedules, leading to generated images primarily characterized by medium brightness levels. Even when prompts include explicit color orientations, the generated images tend to gravitate towards a mean brightness. Even when prompts specify \"a solid black image\" or \"a pure white background\", the models will still produce images that are obviously incongruous with the provided descriptions (see examples in Fig. 1). We deduced that such inconsistencies are caused by a divergence between inference and training stages, due to inadequacies inherent in the dominant noise schedules. In detail, during the inference procedure, the initial noise is drawn from a pure Gaussian distribution. In contrast, during the training phase, previous approaches such as linear [9] and cosine [26] schedules manifest a non-zero SNR at the concluding timestep. This results in low-frequency components, especially the mean value, of the training dataset remaining residually present in the final latents during training, to which the model learns to adapt. 
However, when presented with pure Gaussian noise during inference, the model behaves as if these residual components are still present, resulting in the synthesis of suboptimal imagery [3,10].\nIn addressing the aforementioned issue, Guttenberg and CrossLabs [7] first proposed a straightforward solution: introducing a specific offset to the noise derived from sampling, thereby altering its mean value. This technique has been designated as offset noise. While this methodology has been employed in some of the more advanced models [27], it is not devoid of inherent challenges. Specifically, the incorporation of this offset disrupts the iid distribution characteristics of the noise across individual units. Consequently, although this modification enables the model to produce images with high luminance or profound darkness, it might inadvertently generate signals incongruent with the distribution of the training dataset. A more detailed study [18] suggests a zero terminal SNR method that rescaling the model's schedule to ensure the SNR is zero at the terminal timestep can address this issue. Nonetheless, this strategy necessitates the integration of v-prediction models [33] and mandates subsequent fine-tuning across the entire network, regardless of whether the network is based on v-prediction or ϵ-prediction [9]. Besides, fine-tuning these widely-used pre-trained models would render many community models based on earlier releases incompatible, diminishing the overall cost-to-benefit ratio.\nTo better address this challenge, we revisited the reasons for its emergence: flaws in the schedule result in a mismatch between the marginal distributions of terminal noise during the training and inference stages. Concurrently, we found the distinct nature of this terminal timestep: the latents predicted by the model at the terminal timestep continue to be associated with the data distribution.\nBased on the above findings, we propose a plug-and-play method, named One More Step, that solves this problem without necessitating alterations to the pre-existing trained models, as shown in Fig. 1. This is achieved by training an auxiliary text-conditional network tailored to map pure Gaussian noise to the data-adulterated noise assumed by the pre-trained model, optionally under the guidance of an additional prompt, and is introduced prior to the inception of the iterative sampling process.\nOMS can rectify the disparities in marginal distributions encountered during the training and inference phases. Additionally, it can also be leveraged to adjust the generated images through an additional prompt, due to its unique property and position in the sampling sequence. It is worth noting that our method exhibits versatility, being amenable to any variance-preserving [37] diffusion framework, irrespective of the network prediction type, whether ϵ-prediction or v-prediction, and independent of the SDE or ODE solver employed. Experiments demonstrate that SD1.5, SD2.1, LCM [22] and other popular community models can share the same OMS module for improved image generation." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Diffusion Model and its Prediction Types", "publication_ref": [ "b8", "b34", "b36", "b8" ], "table_ref": [], "text": "We consider diffusion models [9,35] specified in discrete time space and variance-preserving (VP) [37] formulation. 
Given the training data x ∈ p(x), a diffusion model performs the forward process to destroy the data x 0 into noise x T according to the pre-defined variance schedule {β t } T t=1 according to a perturbation kernel, defined as:\nq(x 1:T |x 0 ) := T t=1 q(x t |x t-1 ),(1)\nq(x t |x t-1 ) := N x t ; 1 -β t x t-1 , β t I .(2)\nThe forward process also has a closed-form equation, which allows directly sampling x t at any timestep t from x 0 :\nq(x t |x 0 ) := N (x t ; √ ᾱt x 0 , (1 -ᾱt )I),(3)\nwhere ᾱt = t s=1 α s and α t = 1 -β t . Furthermore, the signal-to-noise ratio (SNR) of the latent variable can be defined as:\nSNR(t) = ᾱt /(1 -ᾱt ).\n(4)\nThe reverse process denoises a sample x T from a standard Gaussian distribution to a data sample x 0 following:\np θ (x t-1 |x t ) := N (x t-1 ; μt , σ2 t I). (5\n) μt := √ ᾱt-1 β t 1 -ᾱt x 0 + √ α t (1 -ᾱt-1 ) 1 -ᾱt x t(6)\nInstead of directly predicting μt using a network θ, predicting the reparameterised ϵ for x 0 leads to a more stable result [9]:\nx0 := (x t - √ 1 -ᾱt ϵ θ (x t , t))/ √ ᾱt (7)\nand the variance of the reverse process σ2 t is set to be σ 2 t = 1-ᾱt-1 1-ᾱt β t while x t ∼ N (0, 1). Additionally, predicting velocity [33] is another parameterisation choice for the network to predict:\nv t := √ ᾱt ϵ - √ 1 -ᾱt x 0 ;(8)\nwhich can reparameterise x0 as:\nx0 := √ ᾱt x t - √ 1 -ᾱt v θ (x t , t)(9)" }, { "figure_ref": [], "heading": "Offset Noise and Zero Terminal SNR", "publication_ref": [ "b17" ], "table_ref": [], "text": "Offset noise [7] is a straightforward method to generate dark or light images more effectively by fine-tuning the model with modified noise. Instead of directly sampling a noise from standard Gaussian Distribution ϵ ∼ N (0, I), one can sample the initial noise from\nϵ ∼ N (0, I + 0.1Σ),(10)\nwhere Σ is a covariance matrix of all ones, representing fully correlated dimensions. This implies that the noise bias introduced to pixel values across various channels remains consistent. In the initial configuration, the noise attributed to each pixel is independent, devoid of coherence. By adding a common noise across the entire image (or along channels), changes can be coordinated throughout the image, facilitating enhanced regulation of low-frequency elements. However, this is an unprincipled ad hoc adjustment that inadvertently leads to the noise mean of inputs deviating from representing the mean of the actual image.\nA different research endeavor proposes a more fundamental approach to mitigate this challenge [18]: rescaling the beta schedule ensures that the low-frequency information within the sampled latent space during training is thoroughly destroyed. To elaborate, current beta schedules are crafted with an intent to minimize the SNR at x T . However, constraints related to model intricacies and numerical stability preclude this value from reaching zero. Given a beta schedule used in LDM [30]:\nβ t = √ 0.00085 T -t T -1 + √ 0.012 t -1 T -1 2 ,(11)\nthe terminal SNR at timestep T = 1000 is 0.004682 and √ ᾱT is 0.068265. To force terminal SNR=0, rescaling can be done to make ᾱT = 0 while keeping ᾱ0 fixed. Subsequently, this rescaled beta schedule can be used to finetune the model to avoid the information leakage. Concurrently, to circumvent the numerical instability induced by the prevalent ϵ-prediction at zero terminal SNR, this work mandates the substitution of prediction types across all timesteps with v-prediction. 
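The terminal values quoted above can be reproduced directly from Eq. 11, and the offset-noise trick of Eq. 10 amounts to a one-line change in the noise sampler. The snippet below is an illustrative check, not code from the paper:

```python
import torch

T = 1000
t = torch.arange(1, T + 1)
betas = (0.00085**0.5 * (T - t) / (T - 1) + 0.012**0.5 * (t - 1) / (T - 1)) ** 2   # Eq. (11)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)
snr_T = alpha_bar[-1] / (1.0 - alpha_bar[-1])                                      # Eq. (4) at t = T
print(float(snr_T), float(alpha_bar[-1].sqrt()))                                   # ~0.004682, ~0.068265

# Offset noise (Eq. 10): one shared offset per sample and channel correlates all pixels.
latents = torch.randn(4, 4, 64, 64)
noise = torch.randn_like(latents) + 0.1 * torch.randn(latents.shape[0], latents.shape[1], 1, 1)
```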
However, such approaches cannot be correctly applied for sampling from pre-trained models that are based on Eq. 11." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Discrepancy between Training and Sampling", "publication_ref": [ "b1" ], "table_ref": [], "text": "From the beta schedule in Eq. 11, we find the SNR cannot reach zero at terminal timestep as ᾱT is not zero. Substituting the value of ᾱT in Eq. 3, we can observe more intuitively that during the training process, the latents sampled by the model at T deviate significantly from expected values:\nx T T = ᾱT T x 0 + 1 -ᾱT T z,(12)\nwhere ᾱT T = 0.068265 and 1 -ᾱT T = 0.997667. During the training phase, the data fed into the model is not entirely pure noise at timestep T . It contains minimal yet data-relevant signals. These inadvertently introduced signals contain low-frequency details, such as the overall mean of each channel. The model is subsequently trained to denoise by respecting the mean in the leaked signals. However, in the inference phase, sampling is executed using standard Gaussian distribution. Due to such an inconsistency in the distribution between training and inference, when given the zero mean of Gaussian noise, the model unsurprisingly produces samples with the mean value presented at T , resulting in the manifestation of images with median values. Mathematically, the directly sampled variable x S T in the inference stage adheres to the standard Gaussian distribution N (0, I). However, the marginal distribution of the forward process from image space X to the latent space x T T during training introduces deviations of the lowfrequency information, which is non-standard Gaussian distribution.\nThis discrepancy is more intuitive in the visualization of high-dimensional Gaussian space by estimating the radius r [41], which is closely related to the expected distance of a random point from the origin of this space. Theoretically, given a point x = (x 1 , x 2 , . . . , x d ) sampled within the Gaussian domain spanning a d-dimensional space, the squared length or the norm of x inherently denotes the squared distance from this point to the origin according to:\nE(x 2 1 + x 2 2 + • • • + x 2 d ) = dE(x 2 1 ) = dσ 2 ,(13)\nand the square root of the norm is Gaussian radius r. When this distribution is anchored at the origin with its variance represented by σ, its radius in Gaussian space is determined by:\nr = σ √ d,(14)\nthe average squared distance of any point randomly selected from the Gaussian distribution to the origin. Subsequently, we evaluated the radius within the high-dimensional space for both the variables present during the training phase r T and those during the inference phase r S , considering various beta schedules, the results are demonstrated in Tab. 1. Additionally, drawing from [2,41], we can observe that the concentration mass of the Gaussian sphere resides above the equator having a radius magnitude of O r √ d , also within an annulus of constant width and radius n. Therefore, we can roughly visualize the distribution of terminal variables during both the training and inference processes in Fig. 2. It can be observed that a discernible offset emerges between the terminal distribution x T T and x S T and r S > r T . This intuitively displays the discrepancy between training and inference, which is our primary objective to mitigate. 
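A rough Monte Carlo version of the Table 1 measurement (illustrative only): estimate r = sqrt(E||x||^2) from Eqs. 13-14 with 20,000 points in pixel space (d = 3*256*256), using a uniform [-1, 1] stand-in for normalized images; reproducing the exact Table 1 numbers requires the real training data.

```python
import torch

d, n, batch = 3 * 256 * 256, 20_000, 500
alpha_bar_T = 0.068265 ** 2            # terminal alpha-bar of the SD schedule (Sec. 2.2)

def mean_sq_norm(sample_fn):
    total = 0.0
    for _ in range(n // batch):
        total += sample_fn(batch).pow(2).sum(dim=1).double().sum().item()
    return total / n

def terminal_train_latent(b):          # x_T^T = sqrt(abar_T) x0 + sqrt(1 - abar_T) z   (Eq. 12)
    x0 = 2.0 * torch.rand(b, d) - 1.0  # stand-in for images normalized to [-1, 1]
    return alpha_bar_T**0.5 * x0 + (1.0 - alpha_bar_T)**0.5 * torch.randn(b, d)

r_T = mean_sq_norm(terminal_train_latent) ** 0.5        # ~442.7, cf. the "LDM Pixels" row of Tab. 1
r_S = mean_sq_norm(lambda b: torch.randn(b, d)) ** 0.5  # ~443.4 = sigma * sqrt(d)      (Eq. 14)
print(r_T, r_S, r_S - r_T)
```

The gap r_S - r_T is small in absolute terms but systematic, matching the offset visualized in Fig. 2.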
Additional theoretical validations are relegated to the Appendix B for reference. 1. Estimation of the Gaussian radius during the sampling and inference phases under different beta schedules. Here, we randomly sampled 20,000 points to calculate the radius." }, { "figure_ref": [], "heading": "Prediction at Terminal Timestep", "publication_ref": [ "b8" ], "table_ref": [], "text": "According to Eq. 5 & 7, we can obtain the sampling process under the text-conditional DDPM pipeline with ϵ-prediction at timestep T : where z, x T ∼ N (0, I). In this particular scenario, it is obvious that the ideal SNR(T ) = 0 setting (with α T = 0) will lead to numerical issues, and any predictions made by the network at time T with an SNR(T ) = 0 are arduous and lack meaningful interpretation. This also elucidates the necessity for the linear schedule to define its start and end values [9] and for the cosine schedule to incorporate an offset s [26].\nx T -1 = 1 √ α T x T - 1 -α T √ 1 -ᾱT ϵ θ + σ T z,(15)\nUtilizing SNR-independent v-prediction can address this issue. By substituting Eq. 9 into Eq. 5, we can derive:\nx T -1 = √ α T x T - √ ᾱT -1 (1 -α T ) √ 1 -ᾱT v θ + σ T z,(16)\nwhich the assumption of SNR(T ) = 0 can be satisfied: when SNR(T ) = 0, the reverse process of calculating x T -1 depends only on the prediction of v θ (x T , T ),\nx T -1 = - √ ᾱT -1 v θ + σ T z,(17)\nwhich can essentially be interpreted as predicting the direction of x 0 according to Eq. 8:\nx T -1 = √ ᾱT -1 x 0 + σ T z.(18)\nThis is also consistent with the conclusions of angular parameterisation 1 [33]. To conclude, under the ideal condition of SNR = 0, the model is essentially forecasting the L2 mean of the data, hence the objective of the vprediction at this stage aligns closely with that of the direct x 0 -prediction. Furthermore, this prediction by the network at this step is independent of the pipeline schedule, implying that the prediction remain consistent irrespective of the variations in noise input." }, { "figure_ref": [], "heading": "Adding One More Step", "publication_ref": [ "b7" ], "table_ref": [], "text": "Holding the assumption that x T belongs to a standard Gaussian distribution, the model actually has no parameters to be trained with pre-defined beta schedule, so the objective L T should be the constant:\nL T = D KL (q(x T |x 0 )∥p(x T )) . (19\n)\nIn the present architecture, the model conditioned on x T actually does not participate in the training. However, existing models have been trained to predict based on x T T , which indeed carries some data information.\nDrawing upon prior discussions, we know that the model's prediction conditioned on x S T should be the average of the data, which is also independent of the beta schedule. This understanding brings a new perspective to the problem: retaining the whole pipeline of the current model, encompassing both its parameters and the beta schedule. In contrast, we can reverse x S T to x T T by introducing One More Step (OMS). In this step, we first train a network ψ(x S T , C) to perform v-prediction conditioned on x S T ∼ N (0, I) with L2 loss ∥v S T -ṽ S T ∥ 2 2 , where v S T = -x 0 and ṽS T is the prediction from the model. Next, we reconstruct xT T based on the output of ψ with different solvers. In addition to the SDE Solver delineated in Eq. 
17, we can also leverage prevalent ODE Solvers, e.g., DDIM [36]:\nxT T = ᾱT T x0 + 1 -ᾱT T -σ 2 T x S T + σ T z, (20\n)\nwhere Notably, the prompt C ψ in OMS phase ψ(•) can be different from the conditional information C θ for the pre-trained diffusion model θ(•). Modifying the prompt in OMS phase allows for additional manipulation of low-frequency aspects of the generated image, such as color and luminance. Besides, OMS module also support classifier free guidance [8] to strength the text condition:\nψ cfg (x S T , C ψ , ∅, ω ψ ) = ψ(x S T , ∅)+ω ψ ψ(x S T , C ψ ) -ψ(x S T , ∅) , (21)\nwhere ω ψ is the CFG weights for OMS. Experimental results for inconsistent prompt and OMS CFG can be found in Sec. 4.3. Step. While directly sampling method requires sampling from a Gaussian distribution with a radius of r T , yet it samples from the standard Gaussian with r S in practice. OMS bridges the gap ∆r between r S and the required r T through an additional inference step. Here n is the width of the narrow band where the distribution mass is concentrated.\nIt is worth noting that OMS can be adapted to any pretrained model within the same space. Simply put, our OMS module trained in the same VAE latent domain can adapt to any other model that has been trained within the same latent space and data distribution. Details of the OMS and its versatility can be found in Appendix D.2 & D.4." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "This section begins with an evaluation of the enhancements provided by the proposed OMS module to pre-trained generative models, examining both qualitative and quantitative aspects, and its adaptability to a variety of diffusion models. Subsequently, we conducted ablation studies on pivotal designs and dive into several interesting occurrences." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b33", "b8", "b30" ], "table_ref": [], "text": "We trained our OMS module on LAION 2B dataset [34]. OMS module architecture follows the widely used UNet [9,31] in diffusion, and we evaluated different configurations, " }, { "figure_ref": [ "fig_5", "fig_6" ], "heading": "Performance", "publication_ref": [ "b21", "b18", "b27", "b37", "b15", "b15" ], "table_ref": [], "text": "Qualitative Figs. 1 and 5 illustrate that our approach is capable of producing images across a large spectrum of brightness levels. Among these, SD1.5, SD2.1 and LCM [22] use the same OMS module, whereas SDXL employs a separately trained OMS module 2 . As shown in the Fig. 5 left, existing models invariably yield samples of medium brightness and are not able to generate accurate images when provided with explicit prompts. In contrast, our model generates a distribution of images that is more broadly covered based on the prompts. In addition to further qualifying the result, we also show some integration of the widely popular customized LoRA [11] and base models in the community with our module in Appendix. E, which also ascertains the versatility of OMS.\nQuantitative For the quantitative evaluation, we randomly selected 10k captions from MS COCO [19] for zeroshot generation of images. We used Fréchet Inception Distance (FID), CLIP Score [28], Image Reward [38], and PickScore [16] to assess the quality, text-image alignment, and human preference of generated images. Tab. 2 presents a comparison of these metrics across various models, either with or without the integration of the OMS module. It is worth noting that Kirstain et al. 
[16] demonstrated that the FID score for COCO zero-shot image generation has a negative correlation with visual aesthetics, thus the FID metric is not congruent with the goals of our study. Instead, we have further computed the Precision-Recall (PR) [17] and Density-Coverage (DC) [24] between the ground truth images and those generated, as detailed in the Tab. 2. Additionally, we calculate the mean of images and the Wasserstein distance [32], and visualize the log-frequency distribution in Fig. 6. It is evident that our proposed OMS module promotes a more broadly covered distribution." }, { "figure_ref": [ "fig_7", "fig_7", "fig_9" ], "heading": "Ablation", "publication_ref": [ "b0", "b7" ], "table_ref": [], "text": "Module Scale Initially, we conducted some research on the impact of model size. The aim is to explore whether variations in the parameter count of the OMS model would influence the enhancements in image quality. We experimented with OMS networks of three different sizes and discovered that the amelioration of image quality is not sensitive to the number of OMS parameters. From Tab. 4 in Appendix D.3, we found that even with only 3.7M parameters, 2 The VAE latent domain of the SDXL model differs considerably from those of SD1.5, SD2.1 and LCM. For more detailed information, please refer to the Appendix. D.4 (a) Modifying the prompts in the OMS module can adjust the brightness in the generated images. the model was still able to successfully improve the distribution of generated images. This result offers us an insight: it is conceivable that during the entire denoising process, certain timesteps encounter relatively trivial challenges, hence the model scale of specific timestep might be minimal and using a Mixture of Experts strategy [1] but with different scale models at diverse timesteps may effectively reduce the time required for inference.\nText Encoder Another critical component in OMS is the text encoder. Given that the OMS model's predictions can be interpreted as the mean of the data informed by the prompt, it stands to reason that a more potent text encoder would enhance the conditional information fed into the OMS module. However, experiments show that the improvement brought by different encoders is also limited. We believe that the main reason is that OMS is only effective for Modified Prompts In addition to providing coherent prompts, we also conducted experiments to examine the impact of the low-frequency information during the OMS step with different prompts, mathematically C ψ ̸ = C θ . We discovered that the brightness level of the generated images can be easily controlled with terms like C ψ is \"dark\" or \"light\" in the OMS phase, as can be seen from Fig. 7a. Additionally, our observations indicate that the modified prompts used in the OMS are capable of influencing other semantic aspects of the generated content, including color variations as shown in Fig. 7b.\nClassifier-free guidance Classifier-free guidance (CFG) is well-established for enhancing the quality of generated content and is a common practice [8]. CFG still can play a key component in OMS, effectively influencing the lowfrequency characteristics of the image in response to the given prompts. Due to the unique nature of our OMS target for generation, the average value under ∅ is close to that of conditioned ones C ψ . As a result, even minor applications of CFG can lead to considerable changes. Our experiments show that a CFG weight ω ψ = 2 can create distinctly visible alterations. In Fig. 
8, we can observe the performance of generated images under different CFG weights for OMS module. It worth noting that CFG weights of OMS and the pre-trained model are imposed independently." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In summary, our observations indicate a discrepancy in the terminal noise between the training and sampling stages of diffusion models due to the schedules, resulting in a distribution of generated images that is centered around the mean. To address this issue, we introduced One More Step, which adjusts for the training and inference distribution discrepancy by integrating an additional module while preserving the original parameters. Furthermore, we discovered that the initial stages of the denoising process with low SNR largely determine the low-frequency traits of the images, particularly the distribution of brightness, and this phase does not demand an extensive parameter set for accurate model fitting." }, { "figure_ref": [], "heading": "One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Related Works", "publication_ref": [ "b8", "b36", "b0", "b11", "b28", "b0", "b28", "b4", "b19", "b21", "b7", "b5" ], "table_ref": [], "text": "Diffusion models [9,37] have significantly advanced the field of text-to-image synthesis [1,12,13,25,29,30]. These models often operate within the latent space to optimize computational efficiency [30] or initially generate lowresolution images that are subsequently enhanced through super-resolution techniques [1,29]. Recent developments in fast sampling methods have notably decreased the diffusion model's generation steps from hundreds to just a few [15,[20][21][22]36]. Moreover, incorporating classifier guidance during the sampling phase significantly improves the quality of the results [4]. While classifier-free guidance is commonly used [8], exploring other guidance types also presents promising avenues for advancements in this domain [6,40]." }, { "figure_ref": [], "heading": "B. High Dimensional Gaussian", "publication_ref": [ "b1" ], "table_ref": [], "text": "In our section, we delve into the geometric and probabilistic features of high-dimensional Gaussian distributions, which are not as evident in their low-dimensional counterparts. These characteristics are pivotal for the analysis of latent spaces within denoising models, given that each intermediate latent space follows a Gaussian distribution during denoising. Our statement is anchored on the seminal work by [2,41]. These works establish a connection between the high-dimensional Gaussian distribution and the latent variables inherent in the diffusion model.\nProperty B.1 For a unit-radius sphere in high dimensions, as the dimension d increases, the volume of the sphere goes to 0, and the maximum possible distance between two points stays at 2." 
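A quick numerical check of the concentration behaviour used throughout this appendix (illustrative, not from the paper): for x ~ N(0, I_d), the norm ||x|| concentrates around sqrt(d) in an annulus whose width stays O(1) as d grows.

```python
import torch

for d in (64, 1024, 16384):
    norms = torch.randn(10_000, d).norm(dim=1)
    # mean ~ sqrt(d) while std stays ~ 1/sqrt(2) for any d: the mass sits in a thin annulus
    print(d, round(d**0.5, 2), float(norms.mean()), float(norms.std()))
```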
}, { "figure_ref": [], "heading": "Lemma B.2", "publication_ref": [], "table_ref": [], "text": "The surface area A(d) and the volume V (d) of a unit-radius sphere in d-dimensions can be obtained by:\nA(d) = 2π d/2 Γ(d/2) , V (d) = π d/2 d 2 Γ(d/2) , (22\n)\nwhere Γ(x) represents an extension of the factorial function to accommodate non-integer values of x, the aforementioned Property B.1 and Lemma B.2 constitute universal geometric characteristics pertinent to spheres in higherdimensional spaces. These principles are not only inherently relevant to the geometry of such spheres but also have significant implications for the study of high-dimensional Gaussians, particularly within the framework of diffusion models during denoising process." }, { "figure_ref": [], "heading": "Property B.3", "publication_ref": [], "table_ref": [], "text": "The volume of a high-dimensional sphere is essentially all contained in a thin slice at the equator and is simultaneously contained in a narrow annulus at the surface, with essentially no interior volume. Similarly, the surface area is essentially all at the equator.\nThe Property B.3 implies that samples from x S T are falling into a narrow annulus.\nLemma B.4 For any c > 0, the fraction of the volume of the hemisphere above the plane\nx 1 = c √ d-1 is less than 2 c e -c 2 2 .\nLemma B.5 For a d-dimensional spherical Gaussian of variance 1, all but 4 c 2 e -c 2 /4 fraction of its mass is within the annulus T compared to the practical sampling spaces x T T derived from various schedules, which should share an identical radius ideally.\n√ d -1 -c ≤ r ≤ √ d -1 + c for any c > 0." }, { "figure_ref": [], "heading": "Property B.6", "publication_ref": [], "table_ref": [], "text": "The maximum likelihood spherical Gaussian for a set of samples is the one over center equal to the sample mean and standard deviation equal to the standard deviation of the sample.\nThe above Property B.6 provides the theoretical foundation whereby the mean of squared distances serves as a robust statistical measure for approximating the radius of highdimensional Gaussian distributions." }, { "figure_ref": [ "fig_19", "fig_11" ], "heading": "C. Expression of DDIM in angular parameterization", "publication_ref": [], "table_ref": [], "text": "The following covers derivation that was originally presented in [33], with some corrections. We can simplify the DDIM update rule by expressing it in terms of ϕ t = arctan(σ t /α t ), rather than in terms of time t or log-SNR λ t , as we show here. Given our definition of ϕ, and assuming a variance preserving diffusion process, we have α ϕ = cos(ϕ), σ ϕ = Figure 9. The same set of configurations (SDXL w/ LCM-LoRA with 4(+1) Steps) as Fig. 13 but with different random seeds. SDXL with LCM-LoRA leans towards black-and-white images, but OMS produces more colorful images. It is worth noting the mean value of all SDXL with LCM-LoRA results is 0.24 while the average value of OMS results is 0.17. We hypothesize the tendency of SDXL to produce black-and-white images is a direct result of flaws in its scheduler for training. sin(ϕ), and hence z ϕ = cos(ϕ)x + sin(ϕ)ϵ. We can now define the velocity of z ϕ as\nv ϕ ≡ dz ϕ dϕ = d cos(ϕ) dϕ x + d sin(ϕ) dϕ ϵ = cos(ϕ)ϵ -sin(ϕ)x.(23)\nRearranging ϵ, x, v, we then get:\nsin(ϕ)x = cos(ϕ)ϵ -v ϕ = cos(ϕ) sin(ϕ) (z -cos(ϕ)x) -v ϕ(24)\nsin 2 (ϕ)x = cos(ϕ)z -cos 2 (ϕ)x -sin(ϕ)v ϕ(25)\n(sin 2 (ϕ) + cos 2 (ϕ))x = x = cos(ϕ)z -sin(ϕ)v ϕ ,(26)\nand similarly we get ϵ = sin(ϕ)z ϕ + cos(ϕ)v ϕ . 
Furthermore, we define the predicted velocity as:\nvθ (z ϕ ) ≡ cos(ϕ)ε θ (z ϕ ) -sin(ϕ)x θ (z ϕ ),(27)\nwhere εθ (z ϕ ) = (z ϕ -cos(ϕ)x θ (z ϕ ))/ sin(ϕ).\nRewriting the DDIM update rule in the introduced terms then gives:\nz ϕs = cos(ϕ s )x θ (z ϕt ) + sin(ϕ s )ε θ (z ϕt ) = cos(ϕ s )(cos(ϕ t )z ϕt -sin(ϕ t )v θ (z ϕt ))+ sin(ϕ s )(sin(ϕ t )z ϕt + cos(ϕ t )v θ (z ϕt )) =[cos(ϕ s ) cos(ϕ t )+ sin(ϕ s ) sin(ϕ t )]z ϕt + [sin(ϕ s ) cos(ϕ t ) -cos(ϕ s ) sin(ϕ t )]v θ (z ϕt ).(28)\nFinally, we use the trigonometric identities\ncos(ϕ s ) cos(ϕ t ) + sin(ϕ s ) sin(ϕ t ) = cos(ϕ s -ϕ t ) sin(ϕ s ) cos(ϕ t ) -cos(ϕ s ) sin(ϕ t ) = sin(ϕ s -ϕ t ),(29)\nto find that3 \nz ϕs = cos(ϕ s -ϕ t )z ϕt + sin(ϕ s -ϕ t )v θ (z ϕt ). (30\n)\nor equivalently\nz ϕt-δ = cos(δ)z ϕt -sin(δ)v θ (z ϕt ).(31)\nViewed from this perspective, DDIM thus evolves z ϕs by moving it on a circle in the (z ϕt , vϕt ) basis, along the -v ϕt direction. When SNR is set to zero, the v-prediction effectively reduces to the x 0 -prediction. The relationship between z ϕt , v t , α t , σ t , x, ϵ is visualized in Fig. 10. Due to space limitations, we omitted some implementation details in the main body, but we provided a detailed version of the OMS based on DDIM sampling in Alg. 1. This example implementation utilizes v-prediction for the OMS and ϵ-prediction for the pre-trained model. The derivation related to prediction of xT T in Eq. 20 can be obtained from Eq.12 in [36]. Given x t , one can generate x 0 : . In OMS phase, ᾱS T = 0 and ᾱS T -1 = ᾱT T . According to Eq. 9, the OMS module ψ(•) directly predict the direction v of the data, which is equal to -x S 0 :\nxt-1 = √ ᾱt-1 xt - √ ᾱt εt √ ᾱt + 1 -ᾱt-1 -σ 2 t εt +σ t z,(32)\nxS 0 := -v ψ (x S T , C).(33)\nApplying these conditions to Eq. 32 yields the following:\nxT T = ᾱT T xS 0 + 1 -ᾱT T -σ 2 x S T + σz(34)" }, { "figure_ref": [], "heading": "D.2. Additional Comments", "publication_ref": [], "table_ref": [], "text": "Alternative training targets for OMS As we discussed in 3.2, the objective of v-prediction at SNR=0 scenario is exactly the same as negative x 0 -prediction. Thus we can also train the OMS module under the L2 loss between ∥x 0 -x0 ∥ 2 2 , where the OMS module directly predict x0 = ψ(x S T , C). \n∼ N (0, I) if t > 1, else z = 0; if t=T then if ω θ > 1 then εT = θ cfg (x T T , C θ , ∅, ω θ ) ; else εT = θ(x T t , C θ ) ; end xT -1 = √ ᾱT -1 xT T - √ 1-ᾱT T εT √ ᾱT T + 1 -ᾱT -1 -σ 2 εT + σz ; else if ω θ > 1 then εt = θ cfg (x t , C θ , ∅, ω θ ) ; else εt = θ(x T t , C θ ) ; end xt-1 = √ ᾱt-1 xt- √ 1-ᾱtεt √ ᾱt + 1 -ᾱt-1 -σ 2 εt + σz end end return x0\nReasons behind versatility The key point is revealed in Eq. 20. The target prediction of OMS module is only focused on the conditional mean value x0 , which is only related to the training data. x S T is directly sampled from normal distribution, which is independent. Only ᾱT is unique to other pre-defined diffusion pipelines, but it is nonparametric. Therefore, given an x S T and an OMS module ψ, we can calculate any x T T that aligns with the pre-trained model schedule according to Eq. 20. Consistent generation Additionally, our study demonstrates that the OMS can significantly enhance the coherence and continuity between the generated images, which aligns with the discoveries presented in recent research [5] to improve the coherence between frames in the video generation process." }, { "figure_ref": [], "heading": "D.3. 
Implementation Details", "publication_ref": [ "b33" ], "table_ref": [], "text": "Dataset The proposed OMS module and its variants were trained on the LAION 2B dataset [34] without employing any specific filtering operation. All the training images are first resized to 512 pixels by the shorter side and then randomly cropped to dimensions of 512 × 512, along with a random flip. Notably, for the model trained on the pretrained SDXL, we utilize a resolution of 1024. Additionally, we conducted experiments on LAION-HR images with an aesthetic score greater than 5.8. However, we observed that the high-quality dataset did not yield any improvement. This suggests that the effectiveness of our model is independent of data quality, as OMS predicts the mean of training data conditioned on the prompt." }, { "figure_ref": [], "heading": "OMS scale variants", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "We experiment with OMS modules at three different scales, and the detailed settings for each variants are shown in Table 3. Combining these with three different text encoders results in a total of nine OMS modules with different parameters. As demonstrated in Table 4, we found that OMS is not sensitive to the number of parameters and the choice of text encoder used to extract text embeddings for the OMS network." }, { "figure_ref": [], "heading": "Hyper-parameters", "publication_ref": [], "table_ref": [], "text": "In our experiments, we employed the AdamW optimizer with β 1 = 0.9, β 2 = 0.999, and a weight decay of 0.01. " }, { "figure_ref": [ "fig_13", "fig_16", "fig_13" ], "heading": "D.4. OMS Versatility and VAE Latents Domain", "publication_ref": [], "table_ref": [], "text": "The output of the OMS model is related to the training data of the diffusion phase. If the diffusion model is trained in the image domain, then our image domain-based OMS can be widely applied to these pre-trained models. However, the more popular LDM model has a VAE as the first stage that compresses the pixel domain into a latent space. For different LDM models, their latent spaces are not identical. In such cases, the training data for OMS is actually the latent compressed by the VAE Encoder. Therefore, our OMS model is versatile for pre-trained LDM models within the same VAE latent domain, e.g., SD1.5, SD2.1 and LCM. Our analysis reveals that the VAEs in SD1.5, SD2.1, and LCM exhibit a parameter discrepancy of less than 1e-4 and are capable of accurately restoring images. Therefore, we consider that these three are trained diffusion models in the same latent domain and can share the same OMS module. However, for SDXL, our experiments found significant deviations in the reconstruction process, especially in more extreme cases as shown in Fig. 11. Therefore, the OMS module for SDXL needs to be trained separately. But it can still be compatible with other models in the community based on SDXL. If we forcibly use the OMS trained with the VAE of the SD1.5 series on the base model of SDXL, severe color distortion will occur whether we employ latents with unit variance. We demonstrate some practical distortion case with the rescaled unit variance space in Fig. 12. The observed color shift aligns with the effect shown in Fig. 11, e.g., Black → Red. " }, { "figure_ref": [], "heading": "E. More Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_21", "fig_23" ], "heading": "E.1. 
LoRA and Community Models", "publication_ref": [ "b22" ], "table_ref": [], "text": "In this experiment, we selected GhostMix 2.0 BakedVAE, a popular community model, and the LoRA MoXin 1.0. As shown in Fig. 14 & Fig. 15, the OMS module can be applied to many scenarios with clearly visible effects. The LoRA scale is set to 0.75 in these experiments. We encourage readers to adopt our method with a variety of well-established open-source models to enhance the light and shadow effects in generated images.
We also experimented with LCM-LoRA [23] on SDXL for fast inference; the OMS module is the same one used for SDXL." }, { "figure_ref": [ "fig_13", "fig_13" ], "heading": "E.2. Additional Results", "publication_ref": [], "table_ref": [], "text": "Here we demonstrate more examples based on SD1.5 (Fig. 16), SD2.1 (Fig. 17) and LCM (Fig. 18) with OMS." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "We believe that the OMS module could be integrated into a student model through distillation, thereby removing the cost of the additional step. Similarly, when training from scratch or fine-tuning, the OMS module could be incorporated into the backbone model by assigning it a pseudo-t condition. However, doing so would change the pre-trained model parameters and is therefore outside the scope of this work." } ]
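For reference, here is a condensed sketch of the One More Step prelude from Sec. 3.3 / Alg. 1 and Eqs. 20-21, 33-34. It is not the released implementation: `oms` stands in for the trained psi network and all names, defaults, and shapes are illustrative (the default alpha_bar_T is the Stable Diffusion terminal value from Sec. 2.2). It maps standard Gaussian noise x_T^S to the latent x_T^T that the pre-trained schedule actually expects, after which the usual DDIM/DDPM loop runs unchanged.

```python
import torch

@torch.no_grad()
def one_more_step(oms, x_TS, cond, uncond=None, cfg_w=1.0,
                  alpha_bar_T=0.068265**2, sigma=0.0):
    """Map x_T^S ~ N(0, I) to the terminal latent x_T^T expected by the pre-trained schedule."""
    v = oms(x_TS, cond)                          # psi predicts v at SNR = 0
    if uncond is not None and cfg_w > 1.0:       # optional CFG, Eq. (21)
        v_u = oms(x_TS, uncond)
        v = v_u + cfg_w * (v - v_u)
    x0_hat = -v                                  # Eq. (33): at SNR = 0, v equals -x0
    noise = torch.randn_like(x_TS) if sigma > 0 else torch.zeros_like(x_TS)
    # Eq. (20) / (34): rebuild the data-adulterated terminal latent of the pre-trained model.
    return (alpha_bar_T**0.5) * x0_hat + (1.0 - alpha_bar_T - sigma**2)**0.5 * x_TS + sigma * noise
```

The returned latent simply replaces the usual torch.randn initialization of the pre-trained pipeline; no pre-trained weights are touched, which is why one OMS module can be shared across models living in the same VAE latent domain (Appendix D.4).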
One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls
[ { "figure_caption": "AFigure 1 .1Figure 1. Example results of our One More Step method on various sceceries. Traditional sampling methods (Top row) not only lead to (a) generated images converging towards the mean value, but also cause (b) the structure of generated objects to be chaotic, or (c) the theme to not follow prompts. Our proposed One More Step addresses these problems effectively without modifying any parameters in the pre-trained models. Avg. denotes the average pixel value of the images, which are normalized to fall within the range of [0, 1].", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure2. The geometric illustration of concentration mass in the equatorial cross-section of high-dimensional Gaussians, where its mass concentrates in a very small annular band around the radius. Different colors represent the results sampled based on different schedules. It can be seen that as the SNR increases, the distribution tends to be more data-centric, thus the radius of the distribution is gradually decreasing.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "x0 is obtained based on ψ(x S T , C). Subsequently, xT T can be utilized as the initial noise and incorporated into various pre-trained models. From a geometrical viewpoint, we employ a model conditioned on x S T to predict xT T that aligns more closely with N ᾱT T x 0 , (1 -ᾱT T )I , which has a smaller radius and inherits to the training phase of the pretrained model at timestep T . The whole pipeline and geometric explanation is demonstrated in Figs. 3 & 4, and the detailed algorithm and derivation can be referred to Alg. 1 in Appendix D.1.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .Figure 4 .34Figure 3. The pipeline of One More Step. The section highlighted in yellow signifies our introduced OMS module, with ψ being the only trainable component. The segments in blue represents latent vectors, and green represents the pre-trained model used only for the inference.", "figure_data": "", "figure_id": "fig_3", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "(a) Images are sampled by DDIM with 50+1 Steps. Images are sampled by LCM with 4+1 Steps and the same prompts sets.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Qualitative comparison.For each image pair, the left shows results from original pre-trained diffusion models, whereas the right demonstrates the output from these same models enhanced with the OMS under identical prompts. It is worth noting that SD1.5, SD2.1 [30] and LCM[22] in this experiment share the same OMS module, rather than training an exclusive module for each one. .", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Log-frequency histogram of image mean values.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. 
Altering the prompts in the OMS module, while keeping the text prompts in the diffusion backbone model constant, can notably affect the characteristics of the images generated.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Alion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k.A white bedroom.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Images under the same prompt but with different OMS CFG weights applied in OMS module. Notably, CFG weight of the pre-trained diffusion model remains 7.5.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Lemmas B.4 & B.5 imply the volume range of the concentration mass above the equator is in the order of O( r √ d ), also within an annulus of constant width and radius √ d -1. Figs.2 & 4 in main paper illustrates the geometric properties of the ideal sampling space x S", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. Visualization of reparameterizing the diffusion process in terms of ϕ and v ϕ . We highlight the scenario where SNR is equal to zero in orange.", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "where xt 00is parameterised by xt- √ ᾱtεt √ ᾱt", "figure_data": "", "figure_id": "fig_12", "figure_label": "0", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 :1DDIM Sampling with OMS Require: Pre-trained Diffusion Pipeline with a model θ to perform ϵ-prediction. Require: One More Step module ψ(•) Input: OMS Text Prompt C ψ , OMS CFG weight ω ψ Input: Text Prompt C θ , Guidance weight ω θ , Eta σ # Introduce One More Step z ∼ N (0, I) ; x S T ∼ N (0, I); # Classifier Free Guidance at One More Step Phase if ω ψ > 1 then xS 0 = -ψ cfg (x S T , C ψ , ∅, ω ψ ) ; else xS 0 = -ψ(x S T , C ψ ) ; end xT T = ᾱT T xS 0 + 1 -ᾱT T -σ 2 x S T + σz ; # Sampling from Pre-trained Diffusion Model for t = T, . . . , 1 do z", "figure_data": "", "figure_id": "fig_13", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 11. The offset in compression and reconstruction of different series of VAEs.", "figure_data": "", "figure_id": "fig_14", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(a) Close-up portrait of a man wearing suit posing in a dark studio, rim lighting, teal hue, octane, unreal (b) A starry sky", "figure_data": "", "figure_id": "fig_15", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 12 .12Figure 12. Examples of distortion due to incompatible VAEs. Use the OMS model trained on SD1.5 VAE to forcibly conduct inference on SDXL base model. The upper layer of each subfigure shows the results sampled using the original model, while the lower layer shows the results of inference using the biased OMS model.", "figure_data": "", "figure_id": "fig_16", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "and LCM Fig.18with OMS. 
In each subfigure, top row are the images directly sampled from raw pre-trained model, while bottom row", "figure_data": "", "figure_id": "fig_17", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(a) close-up photography of old man standing in the rain at night, in a street lit by lamps, leica 35mm summilux, SDXL with LCM-LoRA, LCM Scheduler with 4 Steps. CFG weight is 1 (no CFG). Mean value is 0.24. (b) close-up photography of old man standing in the rain at night, in a street lit by lamps, leica 35mm summilux, SDXL with LCM-LoRA, LCM Scheduler with 4 + 1 (OMS) Steps. Base model CFG is 1 and OMS CFG is 2. Mean value is 0.14.", "figure_data": "", "figure_id": "fig_18", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 13 .13Figure 13. LCM-LoRA on SDXL for the reproduced result.", "figure_data": "", "figure_id": "fig_19", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "(a) portrait of a woman standing , willow branches, masterpiece, best quality, traditional chinese ink painting, modelshoot style, peaceful, smile, looking at viewer, wearing long hanfu, song, willow tree in background, wuchangshuo, high contrast, in dark, black (b) The moon and the waterfalls, night, traditional chinese ink painting, modelshoot style, masterpiece, high contrast, in dark, black", "figure_data": "", "figure_id": "fig_20", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 14 .14Figure 14. Examples of SD1.5, Community Base Model GhostMix and LoRA MoXin with OMS leading to darker images.", "figure_data": "", "figure_id": "fig_21", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "(a) portrait of a woman standing , willow branches, masterpiece, best quality, traditional chinese ink painting, modelshoot style, peaceful, smile, looking at viewer, wearing long hanfu, song, willow tree in background, wuchangshuo, high contrast, in sunshine, white (b) (masterpiece, top quality, best quality, official art, beautiful and aesthetic:1.2), (1girl), extreme detailed,(fractal art:1.3),colorful,highest detailed, high contrast, in sunshine, white", "figure_data": "", "figure_id": "fig_22", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 15 .15Figure 15. Examples of SD1.5, Community Base Model GhostMix and LoRA MoXin with OMS leading to brighter images.", "figure_data": "", "figure_id": "fig_23", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16. Additional Samples from SD1.5, top row from original model and bottom row with OMS.", "figure_data": "", "figure_id": "fig_24", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 17. Additional Samples from SD2.1, top row from original model and bottom row with OMS.", "figure_data": "", "figure_id": "fig_25", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 18. 
Additional Samples from LCM, top row from original model and bottom row with OMS.", "figure_data": "", "figure_id": "fig_26", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "LDMs were conducted both in the unit variance latent space (4*64*64) and pixel space (3*256*256) while others are conducted in pixel space.", "figure_data": "ScheduleSNR(T )r Tr S∆rcosine2.428e-09 443.404205 443.404235 3.0518e-05linear4.036e-05 443.393676 443.399688 6.0119e-03LDM Pixels4.682e-03 442.713593 443.402527 6.8893e-01LDM Latents † 4.682e-03 127.962364 127.996811 3.4447e-02Table", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative evaluation. All models use DDIM sampler with 50 steps, guidance weight ω", "figure_data": "ModelFID ↓ CLIP Score↑ ImageReward↑ PickScore↑ Precision↑ Recall↑ Density↑ Coverage↑ Wasserstein↓SD1.5RAW 12.52 OMS 14.740.2641 0.26450.1991 0.228921.49 21.550.60 0.640.55 0.460.56 0.640.54 0.5722.47 7.84SD2.1RAW 14.10 OMS 15.720.2624 0.26280.4501 0.456521.80 21.820.58 0.610.55 0.480.52 0.580.50 0.5421.63 7.70SD XLRAW 13.14 OMS 13.290.2669 0.26790.8246 0.873022.51 22.520.64 0.650.52 0.490.67 0.700.63 0.6411.08 7.25", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Model scaling variants of OMS.", "figure_data": "ModelOMS-SOMS-BOMS-LLayer num.222Transformer blocks 111Channels[32, 64, 64][160, 320, 640][320, 640, 1280, 1280]Attention heads[2, 4, 4]8[5, 10, 20, 20]Cross Attn dim.768/1024/4096768/1024/4096768/1024/4096# of OMS params3.3M/3.7M/8.1M 151M/154M/187M 831M/838M/915M", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Experiment results among different OMS scales and text encoders on pre-trained SD2.1.", "figure_data": "The batch size and learning rate areadjusted based on the model scale, text encoder, and pre-trained model, as detailed in Tab. 5. Notably, our observa-tions indicate that our model consistently converges withina relatively low number of iterations, typically around 2,000iterations being sufficient.Hardware and speed All our models were trained usingeight 80G A800 units, and the training speeds are providedin Tab. 5. It is evident that our model was trained with high", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Distinct hyper-parameters and training speed on different model. All models are trained for 2k iterations using 8 80G A800.", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" } ]
Minghui Hu; Jianbin Zheng; Chuanxia Zheng; Chaoyue Wang; Dacheng Tao; Tat-Jen Cham
[ { "authors": "Yogesh Balaji; Seungjun Nah; Xun Huang; Arash Vahdat; Jiaming Song; Karsten Kreis; Miika Aittala; Timo Aila; Samuli Laine; Bryan Catanzaro", "journal": "", "ref_id": "b0", "title": "eDiff-I: Text-to-image diffusion models with an ensemble of expert denoisers", "year": "2022" }, { "authors": "Avrim Blum; John Hopcroft; Ravindran Kannan", "journal": "Cambridge University Press", "ref_id": "b1", "title": "Foundations of data science", "year": "2020" }, { "authors": "Ting Chen", "journal": "", "ref_id": "b2", "title": "On the importance of noise scheduling for diffusion models", "year": "2023" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b3", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Rohit Girdhar; Mannat Singh; Andrew Brown; Quentin Duval; Samaneh Azadi; Sai Saketh Rambhatla; Akbar Shah; Xi Yin; Devi Parikh; Ishan Misra", "journal": "", "ref_id": "b4", "title": "Emu video: Factorizing text-to-video generation by explicit image conditioning", "year": "2023" }, { "authors": "Alexandros Graikos; Nikolay Malkin; Nebojsa Jojic; Dimitris Samaras", "journal": "", "ref_id": "b5", "title": "Diffusion models as plug-and-play priors", "year": "2022" }, { "authors": "Nicholas Guttenberg; Crosslabs ", "journal": "", "ref_id": "b6", "title": "Diffusion with offset noise", "year": "" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b7", "title": "Classifier-free diffusion guidance", "year": "2022" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b8", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Emiel Hoogeboom; Jonathan Heek; Tim Salimans", "journal": "", "ref_id": "b9", "title": "simple diffusion: End-to-end diffusion for high resolution images", "year": "2023" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b10", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Minghui Hu; Yujie Wang; Tat-Jen Cham; Jianfei Yang; Ponnuthurai N Suganthan", "journal": "", "ref_id": "b11", "title": "Global context with discrete diffusion in vector quantised modelling for image generation", "year": "2022" }, { "authors": "Minghui Hu; Chuanxia Zheng; Heliang Zheng; Tat-Jen Cham; Chaoyue Wang; Zuopeng Yang; Dacheng Tao; Ponnuthurai N Suganthan", "journal": "", "ref_id": "b12", "title": "Unified discrete diffusion for simultaneous vision-language generation", "year": "2022" }, { "authors": "Minghui Hu; Jianbin Zheng; Daqing Liu; Chuanxia Zheng; Chaoyue Wang; Dacheng Tao; Tat-Jen Cham", "journal": "", "ref_id": "b13", "title": "Cocktail: Mixing multi-modality controls for text-conditional image generation", "year": "2023" }, { "authors": "Tero Karras; Miika Aittala; Timo Aila; Samuli Laine", "journal": "", "ref_id": "b14", "title": "Elucidating the design space of diffusion-based generative models", "year": "2022" }, { "authors": "Yuval Kirstain; Adam Polyak; Uriel Singer; Shahbuland Matiana; Joe Penna; Omer Levy", "journal": "", "ref_id": "b15", "title": "Pick-a-pic: An open dataset of user preferences for text-to-image generation", "year": "2023" }, { "authors": "Tuomas Kynkäänniemi; Tero Karras; Samuli Laine; Jaakko Lehtinen; Timo Aila", "journal": "Advances in Neural Information Processing 
Systems", "ref_id": "b16", "title": "Improved precision and recall metric for assessing generative models", "year": "2019" }, { "authors": "Shanchuan Lin; Bingchen Liu; Jiashi Li; Xiao Yang", "journal": "", "ref_id": "b17", "title": "Common diffusion noise schedules and sample steps are flawed", "year": "2023" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b18", "title": "Microsoft COCO: Common objects in context", "year": "2014" }, { "authors": "Luping Liu; Yi Ren; Zhijie Lin; Zhou Zhao", "journal": "", "ref_id": "b19", "title": "Pseudo numerical methods for diffusion models on manifolds", "year": "2022" }, { "authors": "Cheng Lu; Yuhao Zhou; Fan Bao; Jianfei Chen; Chongxuan Li; Jun Zhu", "journal": "", "ref_id": "b20", "title": "Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps", "year": "2022" }, { "authors": "Simian Luo; Yiqin Tan; Longbo Huang; Jian Li; Hang Zhao", "journal": "", "ref_id": "b21", "title": "Latent consistency models: Synthesizing highresolution images with few-step inference", "year": "2023" }, { "authors": "Simian Luo; Yiqin Tan; Suraj Patil; Daniel Gu; Apolinário Patrick Von Platen; Longbo Passos; Jian Huang; Hang Li; Zhao", "journal": "", "ref_id": "b22", "title": "LCM-LoRA: A universal stable-diffusion acceleration module", "year": "2023" }, { "authors": "Muhammad Ferjad Naeem; Seong Joon Oh; Youngjung Uh; Yunjey Choi; Jaejun Yoo", "journal": "PMLR", "ref_id": "b23", "title": "Reliable fidelity and diversity metrics for generative models", "year": "2020" }, { "authors": "Alex Nichol; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b24", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2021" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal", "journal": "PMLR", "ref_id": "b25", "title": "Improved denoising diffusion probabilistic models", "year": "2021" }, { "authors": "Dustin Podell; Zion English; Kyle Lacey; Andreas Blattmann; Tim Dockhorn; Jonas Müller; Joe Penna; Robin Rombach", "journal": "", "ref_id": "b26", "title": "SDXL: improving latent diffusion models for high-resolution image synthesis", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b27", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b28", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b29", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "Springer", "ref_id": "b30", "title": "Unet: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Yossi Rubner; Carlo Tomasi; Leonidas J Guibas", "journal": "International journal of computer vision", "ref_id": "b31", "title": "The earth mover's distance as a metric for image retrieval", "year": "2000" }, { 
"authors": "Tim Salimans; Jonathan Ho", "journal": "", "ref_id": "b32", "title": "Progressive distillation for fast sampling of diffusion models", "year": "2022" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "Advances in Neural Information Systems", "ref_id": "b33", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "PMLR", "ref_id": "b34", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b35", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "Yang Song; Jascha Sohl-Dickstein; P Diederik; Abhishek Kingma; Stefano Kumar; Ben Ermon; Poole", "journal": "", "ref_id": "b36", "title": "Score-based generative modeling through stochastic differential equations", "year": "2020" }, { "authors": "Jiazheng Xu; Xiao Liu; Yuchen Wu; Yuxuan Tong; Qinkai Li; Ming Ding; Jie Tang; Yuxiao Dong", "journal": "", "ref_id": "b37", "title": "Imagereward: Learning and evaluating human preferences for textto-image generation", "year": "2023" }, { "authors": "Lvmin Zhang; Anyi Rao; Maneesh Agrawala", "journal": "", "ref_id": "b38", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Min Zhao; Fan Bao; Chongxuan Li; Jun Zhu", "journal": "", "ref_id": "b39", "title": "Egsde: Unpaired image-to-image translation via energyguided stochastic differential equations", "year": "2022" }, { "authors": "Ye Zhu; Yu Wu; Zhiwei Deng; Olga Russakovsky; Yan Yan", "journal": "", "ref_id": "b40", "title": "Boundary guided mixing trajectory for semantic control with diffusion models", "year": "2023" } ]
[ { "formula_coordinates": [ 2, 365.69, 596.45, 179.42, 30.2 ], "formula_id": "formula_0", "formula_text": "q(x 1:T |x 0 ) := T t=1 q(x t |x t-1 ),(1)" }, { "formula_coordinates": [ 2, 338.88, 642.58, 206.24, 9.68 ], "formula_id": "formula_1", "formula_text": "q(x t |x t-1 ) := N x t ; 1 -β t x t-1 , β t I .(2)" }, { "formula_coordinates": [ 2, 348.07, 696.6, 197.05, 17.25 ], "formula_id": "formula_2", "formula_text": "q(x t |x 0 ) := N (x t ; √ ᾱt x 0 , (1 -ᾱt )I),(3)" }, { "formula_coordinates": [ 3, 120.03, 123.54, 96.42, 8.96 ], "formula_id": "formula_3", "formula_text": "SNR(t) = ᾱt /(1 -ᾱt )." }, { "formula_coordinates": [ 3, 98.97, 178.29, 183.52, 10.65 ], "formula_id": "formula_4", "formula_text": "p θ (x t-1 |x t ) := N (x t-1 ; μt , σ2 t I). (5" }, { "formula_coordinates": [ 3, 87.38, 178.64, 198.98, 40.05 ], "formula_id": "formula_5", "formula_text": ") μt := √ ᾱt-1 β t 1 -ᾱt x 0 + √ α t (1 -ᾱt-1 ) 1 -ᾱt x t(6)" }, { "formula_coordinates": [ 3, 94.55, 266.16, 191.81, 17.63 ], "formula_id": "formula_6", "formula_text": "x0 := (x t - √ 1 -ᾱt ϵ θ (x t , t))/ √ ᾱt (7)" }, { "formula_coordinates": [ 3, 113.92, 344.86, 172.44, 17.63 ], "formula_id": "formula_7", "formula_text": "v t := √ ᾱt ϵ - √ 1 -ᾱt x 0 ;(8)" }, { "formula_coordinates": [ 3, 99.92, 387.68, 186.45, 17.63 ], "formula_id": "formula_8", "formula_text": "x0 := √ ᾱt x t - √ 1 -ᾱt v θ (x t , t)(9)" }, { "formula_coordinates": [ 3, 126.03, 506.34, 160.33, 8.99 ], "formula_id": "formula_9", "formula_text": "ϵ ∼ N (0, I + 0.1Σ),(10)" }, { "formula_coordinates": [ 3, 326.87, 138.78, 218.24, 25.51 ], "formula_id": "formula_10", "formula_text": "β t = √ 0.00085 T -t T -1 + √ 0.012 t -1 T -1 2 ,(11)" }, { "formula_coordinates": [ 3, 366.82, 425.54, 178.29, 13.16 ], "formula_id": "formula_11", "formula_text": "x T T = ᾱT T x 0 + 1 -ᾱT T z,(12)" }, { "formula_coordinates": [ 4, 74.66, 162.82, 211.7, 12.69 ], "formula_id": "formula_12", "formula_text": "E(x 2 1 + x 2 2 + • • • + x 2 d ) = dE(x 2 1 ) = dσ 2 ,(13)" }, { "formula_coordinates": [ 4, 148.06, 233.7, 138.3, 17.93 ], "formula_id": "formula_13", "formula_text": "r = σ √ d,(14)" }, { "formula_coordinates": [ 4, 69.42, 692.47, 216.94, 23.23 ], "formula_id": "formula_14", "formula_text": "x T -1 = 1 √ α T x T - 1 -α T √ 1 -ᾱT ϵ θ + σ T z,(15)" }, { "formula_coordinates": [ 4, 321.23, 421.45, 223.88, 29.38 ], "formula_id": "formula_15", "formula_text": "x T -1 = √ α T x T - √ ᾱT -1 (1 -α T ) √ 1 -ᾱT v θ + σ T z,(16)" }, { "formula_coordinates": [ 4, 368.36, 502.41, 176.76, 16.83 ], "formula_id": "formula_16", "formula_text": "x T -1 = - √ ᾱT -1 v θ + σ T z,(17)" }, { "formula_coordinates": [ 4, 372.25, 556.12, 172.86, 16.84 ], "formula_id": "formula_17", "formula_text": "x T -1 = √ ᾱT -1 x 0 + σ T z.(18)" }, { "formula_coordinates": [ 5, 104.15, 152.87, 178.06, 9.65 ], "formula_id": "formula_18", "formula_text": "L T = D KL (q(x T |x 0 )∥p(x T )) . 
(19" }, { "formula_coordinates": [ 5, 282.21, 153.19, 4.15, 8.64 ], "formula_id": "formula_19", "formula_text": ")" }, { "formula_coordinates": [ 5, 70, 400.15, 212.22, 13.16 ], "formula_id": "formula_20", "formula_text": "xT T = ᾱT T x0 + 1 -ᾱT T -σ 2 T x S T + σ T z, (20" }, { "formula_coordinates": [ 5, 282.21, 402.54, 4.15, 8.64 ], "formula_id": "formula_21", "formula_text": ")" }, { "formula_coordinates": [ 5, 50.11, 651.99, 256.98, 25.29 ], "formula_id": "formula_22", "formula_text": "ψ cfg (x S T , C ψ , ∅, ω ψ ) = ψ(x S T , ∅)+ω ψ ψ(x S T , C ψ ) -ψ(x S T , ∅) , (21)" }, { "formula_coordinates": [ 11, 95.89, 581.03, 186.32, 27.58 ], "formula_id": "formula_23", "formula_text": "A(d) = 2π d/2 Γ(d/2) , V (d) = π d/2 d 2 Γ(d/2) , (22" }, { "formula_coordinates": [ 11, 282.21, 589.67, 4.15, 8.64 ], "formula_id": "formula_24", "formula_text": ")" }, { "formula_coordinates": [ 11, 310.06, 294.28, 235.06, 31.4 ], "formula_id": "formula_25", "formula_text": "x 1 = c √ d-1 is less than 2 c e -c 2 2 ." }, { "formula_coordinates": [ 11, 357.57, 353.53, 184.16, 16.81 ], "formula_id": "formula_26", "formula_text": "√ d -1 -c ≤ r ≤ √ d -1 + c for any c > 0." }, { "formula_coordinates": [ 12, 50.11, 431.81, 240.8, 33.83 ], "formula_id": "formula_27", "formula_text": "v ϕ ≡ dz ϕ dϕ = d cos(ϕ) dϕ x + d sin(ϕ) dϕ ϵ = cos(ϕ)ϵ -sin(ϕ)x.(23)" }, { "formula_coordinates": [ 12, 90.11, 504.05, 196.26, 37.39 ], "formula_id": "formula_28", "formula_text": "sin(ϕ)x = cos(ϕ)ϵ -v ϕ = cos(ϕ) sin(ϕ) (z -cos(ϕ)x) -v ϕ(24)" }, { "formula_coordinates": [ 12, 69.45, 566.78, 216.91, 11.8 ], "formula_id": "formula_29", "formula_text": "sin 2 (ϕ)x = cos(ϕ)z -cos 2 (ϕ)x -sin(ϕ)v ϕ(25)" }, { "formula_coordinates": [ 12, 57.55, 604.11, 228.81, 11.8 ], "formula_id": "formula_30", "formula_text": "(sin 2 (ϕ) + cos 2 (ϕ))x = x = cos(ϕ)z -sin(ϕ)v ϕ ,(26)" }, { "formula_coordinates": [ 12, 86.24, 667.08, 200.12, 9.79 ], "formula_id": "formula_31", "formula_text": "vθ (z ϕ ) ≡ cos(ϕ)ε θ (z ϕ ) -sin(ϕ)x θ (z ϕ ),(27)" }, { "formula_coordinates": [ 12, 317.72, 415.82, 227.39, 69.46 ], "formula_id": "formula_32", "formula_text": "z ϕs = cos(ϕ s )x θ (z ϕt ) + sin(ϕ s )ε θ (z ϕt ) = cos(ϕ s )(cos(ϕ t )z ϕt -sin(ϕ t )v θ (z ϕt ))+ sin(ϕ s )(sin(ϕ t )z ϕt + cos(ϕ t )v θ (z ϕt )) =[cos(ϕ s ) cos(ϕ t )+ sin(ϕ s ) sin(ϕ t )]z ϕt + [sin(ϕ s ) cos(ϕ t ) -cos(ϕ s ) sin(ϕ t )]v θ (z ϕt ).(28)" }, { "formula_coordinates": [ 12, 316.31, 514.19, 228.8, 24.6 ], "formula_id": "formula_33", "formula_text": "cos(ϕ s ) cos(ϕ t ) + sin(ϕ s ) sin(ϕ t ) = cos(ϕ s -ϕ t ) sin(ϕ s ) cos(ϕ t ) -cos(ϕ s ) sin(ϕ t ) = sin(ϕ s -ϕ t ),(29)" }, { "formula_coordinates": [ 12, 322.28, 569.67, 218.68, 9.68 ], "formula_id": "formula_34", "formula_text": "z ϕs = cos(ϕ s -ϕ t )z ϕt + sin(ϕ s -ϕ t )v θ (z ϕt ). 
(30" }, { "formula_coordinates": [ 12, 540.96, 570.02, 4.15, 8.64 ], "formula_id": "formula_35", "formula_text": ")" }, { "formula_coordinates": [ 12, 352.8, 609.24, 192.31, 9.68 ], "formula_id": "formula_36", "formula_text": "z ϕt-δ = cos(δ)z ϕt -sin(δ)v θ (z ϕt ).(31)" }, { "formula_coordinates": [ 13, 50.65, 431.81, 237.07, 40.06 ], "formula_id": "formula_37", "formula_text": "xt-1 = √ ᾱt-1 xt - √ ᾱt εt √ ᾱt + 1 -ᾱt-1 -σ 2 t εt +σ t z,(32)" }, { "formula_coordinates": [ 13, 128.32, 539.54, 158.04, 12.69 ], "formula_id": "formula_38", "formula_text": "xS 0 := -v ψ (x S T , C).(33)" }, { "formula_coordinates": [ 13, 74.35, 597.7, 212.01, 13.16 ], "formula_id": "formula_39", "formula_text": "xT T = ᾱT T xS 0 + 1 -ᾱT T -σ 2 x S T + σz(34)" }, { "formula_coordinates": [ 13, 320.82, 302.68, 181.75, 262.39 ], "formula_id": "formula_40", "formula_text": "∼ N (0, I) if t > 1, else z = 0; if t=T then if ω θ > 1 then εT = θ cfg (x T T , C θ , ∅, ω θ ) ; else εT = θ(x T t , C θ ) ; end xT -1 = √ ᾱT -1 xT T - √ 1-ᾱT T εT √ ᾱT T + 1 -ᾱT -1 -σ 2 εT + σz ; else if ω θ > 1 then εt = θ cfg (x t , C θ , ∅, ω θ ) ; else εt = θ(x T t , C θ ) ; end xt-1 = √ ᾱt-1 xt- √ 1-ᾱtεt √ ᾱt + 1 -ᾱt-1 -σ 2 εt + σz end end return x0" } ]
2023-12-08
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b1", "b6", "b8", "b9", "b10" ], "table_ref": [], "text": "For a long time, people have hoped for machines to have the ability to \"learn,\" enabling them to better understand the world and interact with humans by learning knowledge. In contrast to machine learning, the goal of machine unlearning is to endow models with the capability to forget knowledge actively, allowing them to \"proactively\" erase certain specific knowledge they have previously learned [1] [2]. In the era of large language models (LLMs), LLMs un-dergo pre-training on massive amounts of text to acquire and store a broad range of world knowledge. This paradigm has demonstrated excellence in various downstream natural language processing tasks [3] [4]. However, LLMs also encode significant amounts of private data, copyrighted content, and biased information [5] [6]. Recent research indicates that large language models merely recall and replicate the training samples they have encountered. Although this knowledge is encoded in parameterized, distributed,and high-dimensional embedding vectors, it can often be triggered in specific situations, potentially impacting user privacy or causing other data security concerns. Similar to traditional knowledge bases, it is necessary to establish knowledge removal mechanisms for LLMs, allowing for the removal of specific knowledge from the model upon user request. This approach is known as LLMs knowledge unlearning. It grants knowledge in LLMs the Right To Be Forgotten. When users request the removal of information related to personal privacy from applications driven by LLMs, the models should provide a reasonable response, complying with the user's demand for the forgetting of privacy data to protect the user's legitimate interests and mitigate the risk of legal action against these applications.\nUnlearning is not a recently emerging issue. In traditional machine learning research, machine unlearning has long been a subject of widespread research interest. It focuses on studying various unlearning methods for models to forget, aiming to enhance the model's security (unlearning toxic data), privacy (unlearning private data), and impartiality (unlearning biased data) [2]. Traditional approaches in machine unlearning can be broadly categorized into two types [7][8]: 1) designing new unlearning algorithms to isolate target data points during training and then retraining the model based on the unlearning algorithm, such as differential privacy (DP) methods [9] [10]. 2) Approximate unlearning, which involves making limited parameter updates to machine learning models to minimize the additional impact of forgetting target data points, reducing it to an acceptable level while simultaneously constraining other model behaviors from undergoing significant changes [11].\nHowever, in the era of LLMs, traditional machine unlearning methods may not necessarily be applicable to LLMs. The potential reasons for this are as follows: 1) The parameter scale of LLMs is extremely large, leading to a high cost of model retraining, especially in the case of frequent requests for continuous unlearning, which is impractical in reality. 2) LLMs are knowledge-intensive and typically used for openended question answering or inference tasks. These tasks are often modeled as generative tasks in the form of (prompt, output). 
In contrast, previous natural language processing models in machine learning were primarily used for language understanding tasks, with classification tasks like text classification, sentiment analysis, and natural language inference being more common. Unlearning methods designed for these classification tasks are not applicable to generative tasks. 3) Commercialized LLMs generally only provide API access and do not offer a way to access their parameters. These factors have impacted the development of forgetting mechanisms in the era of LLMs, leading to the emergence of LLM knowledge unlearning tailored for these large generative models. Knowledge unlearning process of LLMs is illustrated in Figure 1.\nIn current scenario where resources for training and maintaining LLMs are highly constrained, knowledge unlearning for LLMs proves to be exceptionally practical. It stands as a necessary approach for developing responsible, legally compliant, and user-trusted LLMs. To propel the advancement of this field, this paper investigates existing research related to knowledge unlearning for LLMs, with a primary focus on the problems, methods, and future directions. To the best of our knowledge, this paper is one of the early works in researching this issue. The primary contributions of this paper are as follows:\n• Building upon research on machine unlearning, we introduce for the first time the concept of knowledge unlearning for LLMs. We analyze its differences and connections with machine unlearning.\n• We conduct a comprehensive literature review, and categorize existing methods for knowledge unlearning in LLMs, including methods based on parameter optimization, parameter merging, and in-context learning. Detailed introduction of the principles and characteristics of each method are then provided, as well as the datasets and tasks used in evaluation.\n• Based on an in-depth analysis of challenges and demands in this field, we unveil future research directions of knowledge unlearning in LLMs.\nThe rest of this survey is illustrated in Figure 2. Section 2 defines the problem of knowledge unlearning, comparing it with machine unlearning and model editing. Section 3 introduces knowledge unlearning methods for LLMs, categorizing them into three types: methods based on parameter optimization, parameter merging, and in-context learning. Section 4 presents relevant datasets and evaluations. Section 5 summarizes the work of this paper and discusses future directions." }, { "figure_ref": [], "heading": "Task Description", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Problem Definition", "publication_ref": [ "b11" ], "table_ref": [], "text": "LLMs knowledge unlearning aims to enable the model to forget knowledge associated with certain concepts in the training data, while preserving other unrelated knowledges of the model unaffected. Assuming the original model is F (.; θ), where θdenotes the model parameters. The training set is D t = {(x, y)} where x and y are the input text and its label. The forgetting set is D f = x f , y f which contains the samples to be unlearned by F (•; θ). The retention set is denoted as D r = D -D f = {(x r , y r )} which denotes the data to be retained after unlearning process. The goal of knowledge unlearning is to build an unlearned model F (•; θ ′ ) that satisfies the following requirements [12]:\n1) Effectiveness. 
The output of the unlearned model F (•; θ ′ ) on forgetting set D f should be significantly distinct from that of the original model F (•; θ): The first goal ensures the ability to successfully unlearn the knowledge in the forgetting set, while the second one enables the unlearning process should not affect other unrelated knowledges. In addition, there are further objectives, such as generalization, serialized unlearning, and large-scale unlearning. In this survey, we only define general evaluation metrics that satisfy the fundamental requirements for forgetting.\nmax θ d (F (D f ; θ) ; F (D f ; θ ′ ))" }, { "figure_ref": [ "fig_0" ], "heading": "Potential Applications", "publication_ref": [ "b12" ], "table_ref": [], "text": "Knowledge unlearning technology expands the application scenarios of LLMs, making them more trustworthy to ordinary users and further reducing the risk of misuse. In the future, knowledge unlearning technology will have many promising applications.\nFor eliminating toxic data, dirty data, biased information, and other harmful elements stored in the model and aligning them with human values, knowledge unlearning has an advantage over reinforcement learning from human feedback (RLHF) methods. While RLHF can also achieve these goals, it differs significantly from knowledge unlearning. RLHF [13][14] is a method for adjusting how the model answers questions, aiming to align the model's output with human values using preference data collected from user feedback as training data. Therefore, RLHF requires positive labeled samples like (input, positive label) to reflect human preferences. As known, ChatGPT's RLHF step collects a large amount of user feedback through crowdsourcing, using this feedback data with human preference as labeled samples for aligning the model during training. This process requires significant cost consumption. On the other hand, using knowledge unlearning to directly delete relevant knowledge from the model is easier than teaching the model new preferred knowledge because it eliminates the need for positive sample training and has better timeliness.\nIn addition, knowledge unlearning can be applied to scenarios such as privacy information and copy-Figure 2: Structure of this survey right content protection. It mandates applications driven by large language models to delete stored personal information, contact details, and online comment records, etc. For many online text contents that are available but not formally published, assuming they have been used as training data for LLMs. When the language model generates content, it automatically recalls and applies these copyrightprotected contents by the authors. Consequently, the generated content is highly likely to infringe on the author's copyright. In cases where authors make relevant forgetting requests, LLMs need to respond to these requests and compliantly delete the encoded content. At this point, knowledge unlearning will play a crucial role in efficiently removing the stored copyright-protected content from the model." }, { "figure_ref": [], "heading": "Comparisons to Related Researches", "publication_ref": [], "table_ref": [], "text": "Knowledge unlearning in LLMs involves calibrate the internal knowledge of the model, removing specific information to better align with the real world. Similar research includes machine unlearning and model editing, both of which efficiently update knowledge within models to align it with the real world. 
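Before detailing those distinctions, it may help to make the forgetting objectives of Section 2.1 concrete. The sketch below is only an illustration under simplifying assumptions: `original_model` and `unlearned_model` stand for any callables returning next-token logits, and KL divergence over output distributions is used as one possible choice of the distance d(·, ·); the function names are ours and do not come from any surveyed paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def distribution_gap(model_a, model_b, batch):
    # Average KL(P_a || P_b) over next-token distributions — one possible
    # instantiation of the distance d(., .) in Section 2.1.
    logits_a = model_a(batch)                     # assumed shape: (B, T, vocab)
    logits_b = model_b(batch)
    log_p_a = F.log_softmax(logits_a, dim=-1)
    log_p_b = F.log_softmax(logits_b, dim=-1)
    kl = (log_p_a.exp() * (log_p_a - log_p_b)).sum(-1)   # KL per position
    return kl.mean().item()

def unlearning_report(original_model, unlearned_model, forget_batch, retain_batch):
    # Effectiveness: the gap on the forgetting set D_f should be LARGE.
    effectiveness = distribution_gap(original_model, unlearned_model, forget_batch)
    # Locality: the gap on the retention set D_r should stay SMALL.
    locality = distribution_gap(original_model, unlearned_model, retain_batch)
    return {"gap_on_forget_set": effectiveness, "gap_on_retain_set": locality}
```

Read together, a successful unlearning method should drive the first number up on D_f while keeping the second close to zero on D_r.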
This section details the distinctions between these approaches." }, { "figure_ref": [], "heading": "Relationship with Machine Unlearning", "publication_ref": [], "table_ref": [], "text": "Knowledge unlearning in LLMs stems from traditional machine unlearning, as the current transformer architecture-based language model are, in essence, machine learning models. Their goals are consistent, aiming to remove specific knowledge from the model. However, large language models differ significantly from typical machine learning models, both in terms of parameter scale and the richness of internal knowledge. 1) In terms of parameter scale, traditional machine unlearning methods are highly inefficient for knowledge unlearning in LLMs. 2) Regarding the model's application domains, classification tasks are a focus in traditional machine learning field, lead-ing to more research on machine unlearning methods geared towards classification models (e.g., image classification, text classification, and sentiment analysis). However, for generative LLMs like ChatGPT and GPT-4 which produce answers for a wide range of language understanding and generation tasks, knowledge unlearning tailored them is indispensable. Unfortunately, traditional machine unlearning methods, have rarely addressed these scenarios. " }, { "figure_ref": [], "heading": "Current Methods", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "In this section, we introduce current LLMs knowledge unlearning methods, classifying them into three categories: those based on parameter optimization, parameter merging, and in-context learning, as illustrated in Table 1. Subsequent subsections will offer detailed explanations of these methods." }, { "figure_ref": [ "fig_1" ], "heading": "Parameter Optimization", "publication_ref": [ "b11", "b17", "b18", "b11", "b19", "b20", "b21", "b22", "b23", "b24", "b25" ], "table_ref": [], "text": "Methods based on parameter optimization are the most straightforward approach to knowledge unlearn-ing. Within certain constraints, these methods efficiently fine-tune specific parameters of the model to selectively modify particular behaviors while avoiding any detrimental impact on other aspects of the model. Among these methods, the one frequently used is the reverse gradient method, which entails updating partial parameters of the model in the opposite direction of the gradient descent for the samples earmarked for forgetting [19][20]. Additionally, it is possible to introduce new parameters at intermediate positions in the model and train them to actively \"remember\" the samples slated for forgetting. This approach negates the necessity of modifying the model's inherent parameters, thereby preventing disruption to its original knowledge [12]. KGA (ACL 2023) This method proposes a general forgetting framework based on knowledge gap alignment (KGA), which is applicable to language models of different scales. The knowledge gap refers to the difference in output distributions between two models with identical structures but distinct training data, when given the same input. In this way, these models would produce similar predictions under seen and unseen data.\nAssuming that D t and D f are training and forgetting set respectively. D n has the same distribution as D t and satisfies D t ∩ D n = ∅.. Initially, KGA trains three models on the above three datasets, denote as F Dt , F D f and F Dn . Then, F Dn is used to initialize the parameters of the target model F * (•). 
The training objectives can be divided into two parts:\n1) For dataset D n , KL distance between the output distributions of F * (•) and F * (D f ) is denoted as KL 1 . For forgetting set D f , KL distance between the prediction distributions of F * (•) and F D f is denoted as KL 2 . In other words: KL\n1 :D n → (F Dt , F Dn ), KL 2 :D f → (F * , F D f )\n. By minimizing the distance (knowledge gap) between KL 2 and KL 1 , i.e., min F * |KL 2 -KL 1 |, the goal is to ensure that the performance of the unlearned model F * (•) on forgetting set D f is as if it has never seen that data.\n2) For retention set D r , the output distributions of F * (•) and F (D t ) should be similar. The KL distance between them is referred to as KL 3 , i.e., minimizing the distribution difference min F * KL 3 .\nIn the end, the overall loss is a weighted sum of Parameter optimization KGA [18] With the knowledge gap as the minimization objective, it fine-tunes the parameters of the target model while maintaining its performance on the retaining set.\nDistilBERT: Text classification T-based Encoder-decoder, BART: Generation KUL [19] Gradient ascent method GPT-NEO-125M/1.3B/2.7B, OPT: Classification, Q&A EUL [12] An unlearning layer is inserted after the FFN layer of transformer module. the model parameters are frozen to enable only the unlearning layer to be learned. An offline fusion method for composite multiple unlearning layers is employed.\nT5-base/3B: Classification, Generation LLMU [20] Gradient ascent method OPT-1.3B/-2.7B, LLaMA2-7B: Q&A, Generation DEPN [21] Locate the privacy-related neurons and directly modify their activation.\nBERT-base: Classification AU [22] Reverse loss and token replacement is used. Llama-7b-hf-chat, Phi-1.5: Generation Parameter merging TV [23] Arithmetical operation is used between task vector CLIP: Image classification GPT-2-Samll/Medium/Large: Classification CPEM [24] Addition and subtraction operators are used on PEM (such as LoRA), where subtraction can achieve forgetting. GPT-2-Large: Classification\nIn-context learning ICUL [25] Performing few-shot in-context learning using both forgotten and normal samples as examples.\nBloom-560M/1.1B: Text classification loss a and loss r . From the perspective loss function, KGA only constrains the model's prediction distribution without any requirements on its structure, making it a model-agnostic forgetting approach. However, it also has significant drawbacks in that it requires fine-tuning the entire model's parameters, making it challenging to effectively apply for large models. KUL (ACL 2023) KUL utilizes the gradient ascent method to maximize the loss function, in order to shift the model's predictions for a particular sample in the opposite direction. In this process, a small number of model parameters are updated to forget the unlearning sample. Given a sequence x = (x 1 , x 2 , ..., x n ) for a language modeling task, the goal of knowledge unlearning is to maximize the negative log-likelihood of this sequence:\nloss(f θ , x) = - n i=1 log (p θ (x i |x 1,2,...,i-1 ))\nwhere the conditional probability p θ (x i |x 1,2....,i-1 ) denotes the probability that the model f θ predicts the next word x i given the word sequence (x 1,2....,i-1 ).\nEUL (EMNLP 2023) EUL proposes to insert an additional unlearning layer after the feedforward network (FFN) in transformer block (see Figure 3).By training this unlearning layer on forgetting set, it is able to remember knowledges that need to be forgotten. 
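Returning briefly to the gradient-ascent objective used by KUL (and by the forgetting term of LLMU, described below), the following sketch shows one way it is commonly realized in code; EUL's layer-wise losses are then described next. This is an illustration only, not the authors' released implementation: `model` is assumed to be a causal language model returning logits of shape (batch, seq_len, vocab), and the optimizer settings are placeholders.

```python
import torch
import torch.nn.functional as F

def unlearning_step(model, optimizer, input_ids):
    """One gradient-ascent step on a forget-set sequence:
    increase the negative log-likelihood -sum_i log p(x_i | x_<i)."""
    logits = model(input_ids)                       # (B, T, V), assumed interface
    # Standard next-token shift: predict token t from tokens < t.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = input_ids[:, 1:].contiguous()
    nll = F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
    )
    loss = -nll            # minimizing the negated LM loss == ascending the NLL
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return nll.item()      # should increase as the sequence becomes "forgotten"
```

In practice a small learning rate and early stopping on the retention set are needed, since unconstrained ascent quickly degrades general language ability — precisely the failure mode AU points out later in this section.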
The designed loss function includes unlearning loss, task-specific loss(such as classification or generation tasks), and language modeling loss to constrain the model's performance on both the forgetting set and the retention set. During training, the parameters of the model itself are frozen, and only the parameters of the unlearning layer are trained to maintain the behavior of the model unchanged. Each unlearning request will learn a corresponding unlearning layer, enabling a serialized unlearning mechanism. Finally, through offline fusion of multiple sequentially trained unlearning layers, the number of unlearning layers can be reduced while maintaining the acquired forgetting capabilities. DEPN (arXiv 2023.10) DEPN applies knowledge unlearning method to carefully remove stored privacy information in LLMs. Specifically, for the encoded privacy information such as user names, ID numbers, and contact information in language model, DEPN aims to remove them from the model with low cost. Inspired by the knowledge neurons discovered in model editing, this paper assumes that there is a strong correlation between privacy information and specific neurons in the model, which can be called privacy neurons. If these neurons can be located and the privacy information expression within them can be edited, it is possible to make them forget these privacy-related knowledge. Therefore, the authors use the integrated gradient method [26] to locate the privacy neurons with respect to specific labels, and then modify their activation by setting them to zero to edit the expression of privacy neurons, enabling them to forget the encoded privacy information.\nLLMU (arXiv 2023.10) Similar to research on model editing, LLMU proposes that knowledge unlearning should satisfy four objectives: effectiveness, generalization, utility, and low cost. To achieve the first three goals, loss functions are designed to constrain the training process. For example, for effectiveness, a gradient ascent method is used as the forgetting loss loss f orget . The parameter update step is designed as follows:\nθ t+1 = θ t -ϵ 1 • ∇ θt loss f orget -ϵ 2 • ∇ θt loss mismatch -ϵ 3 • ∇ θt loss maintain\nAmong above loss functions, there are forgetting loss loss f orget , mismatch loss loss mismatch , and retention loss loss maintain .\nAU (arXiv 2023.10) This method focuses on generative language models, such as LLaMa-2, and investigates knowledge unlearning in their answer generations. The author argues that simply using a loss function to penalize the probability of predicting the next word can, in some cases, result in the model losing its language understanding ability as a whole. For example, if we make LLaMa-2 forget knowledge related to \"Harry Potter,\" the following two examples are both related to \"Harry Potter,\" but they reflect different capabilities of the model. The first example examines the knowledge about \"Harry Potter\" that the model acquired during training and its ability to apply that knowledge, while the second example assesses the model's language understanding ability.\nHarry Potter went up to him and said, \"Hello. My name is Harry Potter's two best friends are In this scenario, knowledge unlearning requires the model to lose knowledge related to \"Harry Potter\" (i.e., being unable to answer question 1), while still retaining language understanding capabilities to answer general questions, even if they may contain the term \"Harry Potter\" (i.e., correctly answering question 2). 
Therefore, simply using reverse loss on predicting the next word is insufficient to achieve both of these goals.\nTo align the model's responses to \"Harry Potter\" related questions with a model that has never seen related training data, AU trains an augmented model on the to-be-forgotten data (such as Harry Potter data). This augmented model is designed to identify tokens most relevant to the samples intended for forgetting and compare its logits with those of the base model. Subsequently, the model's representation is replaced with a more generic prediction.\nv generic : = v baseline -αReLU(v reinf orced -v baseline )\nThe underlying assumption is that the reinforced model, having undergone additional training on the target data, provides predictions for the next token that are more relevant to the theme, i.e., the probability of predicting theme-related words in ν reinf orced is maximized. The difference between ν reinf orced and ν baseline is used as the optimization direction for ν baseline to further train the base model. This process aims to shift the predicted logits of the base model away from theme-related words that were previously assigned low probabilities. In certain circumstances, such as when ν reinf orced is similar to ν baseline , the paper also proposes an alternative word replacement method to alter the model's output." }, { "figure_ref": [ "fig_2" ], "heading": "Parameter Merging", "publication_ref": [ "b26", "b27" ], "table_ref": [], "text": "Differing from the methods based on parameter optimization, methods based on parameter merging merely involves the offline composition of previously trained model parameters (e.g., via arithmetic operations like addition and subtraction) without requiring additional parameter training. This process also allows for the removal of specific knowledge from the model while maintaining the stability of other model behaviors. In scenarios where the model has already been deployed, this method proves to be practical, offering a simple and convenient means of implementing knowledge unlearning.\nTV (arXiv 2022.12) This paper introduces the concept of a task vector, which, through arithmetic operations like negation or addition between task vectors, can selectively modify the model's output with minimal impact on other model behaviors. Assuming the weights of the original pretrained model are denoted as θ pre and the weights of the model finetuned for the target task are denoted as , the task vector τ is obtained by subtracting the two (i.e., τ = -θ f t -θ pre ), as illustrated on the left side of Figure 4. This task vector τ represents the parameter change vector after fine-tuning the model for downstream tasks. Taking the negation of the task vector, -τ , enables the language model to forget related knowledge while exerting minimal influence on other aspects of the model, as depicted on the right side of Figure 4.\nCPEM (arXiv 2023.06) This paper primarily addresses the parameter-efficient modules (PEM) for LLMs, such as LoRA [27] and (IA)3 [28]. It employs arithmetic operations, including addition and subtraction, on multiple modules to alter the representation of knowledge within these modules. Two basic operators are defined: the addition operator and the negation operator. 
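Both TV and the operators just introduced rest on the same parameter arithmetic; a minimal sketch is given below before the negation operator is examined in detail. It assumes the two checkpoints share identical state_dict keys, and the scaling factor `lam` and the function names are illustrative rather than taken from either paper.

```python
import torch

def task_vector(pretrained_state, finetuned_state):
    # tau = theta_ft - theta_pre, computed parameter-wise.
    return {k: finetuned_state[k] - pretrained_state[k] for k in pretrained_state}

def apply_negated_task_vector(pretrained_state, tau, lam=1.0):
    # theta_unlearned = theta_pre - lam * tau  (negation => forgetting).
    return {k: pretrained_state[k] - lam * tau[k] for k in pretrained_state}

# Illustrative usage with plain state dicts (no gradient step is involved):
# theta_pre = base_model.state_dict()
# theta_ft  = toxic_finetuned_model.state_dict()
# tau       = task_vector(theta_pre, theta_ft)
# base_model.load_state_dict(apply_negated_task_vector(theta_pre, tau, lam=1.0))
```

For a parameter-efficient module such as LoRA, the same idea can be applied to the module alone, e.g. by flipping the sign of one low-rank factor so that the added activation BA·x is reversed, as discussed next.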
The negation operator, in particular, facilitates the forgetting of knowledge stored in the adapter modules, providing a means to Assuming the parameters of the pretrained model are denoted as θ, where the addition operation is defined as ⊕ and the negation operation as ⊖. For instance, for LoRA module, the computation for the intermediate layer weight matrix w is expressed as w ′ = w +BA, where B and A represent the two lowrank matrices in LoRA. Correspondingly, the activation values for this portion are x ′ = W • x + BA • x . To induce the LoRA module to forget certain knowledge, it suffices to take the negation of the activation change part, i.e., -BA • x . The corresponding arithmetic operation is then:\n⊖θ negation lora = ⊖θ lora = {A, -B}\nThe above operation results in the reversal of activation values. The principle of this method is similar to gradient ascent, inducing a change in the intermediate layer's activation values in the direction opposite to gradient descent. This mechanism facilitates the unlearning of knowledge within the module." }, { "figure_ref": [], "heading": "In-context learning", "publication_ref": [], "table_ref": [], "text": "Methods based on in-context learning differs from the previous two methods by not focusing on parameter operations. Instead, it views the model as a black box and utilizes its inherent in-context learning capability to supplement existing knowledge during inference. Despite consuming fewer resources and being relatively easy to execute, in-context learning methods modify the model's output results in an assisted manner with external knowledge during the questionand-answer process. As a result, it does not fundamentally erase harmful knowledge stored internally in the model.\nICUL (arXiv 2023.10) This paper introduces the novel in-context learning based unlearning method, applicable in scenarios where access is limited to the model's API without visibility into its internal parameters. In the context of inference using input prompts, ICUL leverages unlearning samples like input f and their corresponding answers as prompt examples. These examples aim to rectify outputs through prompt-based learning, relying on a limited set of samples. Specifically, for an unlearning sample, input f , the following three steps are undertaken:\n1) Flip the label of unlearning sample. Flip the label of the unlearning sample input f to label f , obtaining the flipped sample (input f , label f ).\n2) Prompt Examples. The s normal samples combined with the above unlearning sample is used to form the final prompt.\n(input f , label f ) \\ n (input 1 , label 1 ) \\n (input 2 , label 2 ) \\n...\\ n (input s , label s )\n3) Inference. The above demonstrations are used with the query input q to form the final prompt:\n(input f , label f ) (input 1 , label 1 ) (input 2 , label 2 ) . . . (input s , label s ) input q : While ICUL abstains from the need for modifying the model's parameters and treats the model as a black box, its limitation stems from reliance on the model's capability of in-context learning. If the model demonstrates a weak aptitude for in-context learning, its ability to glean insights from the provided examples and adapt its output is consequently diminished. In addition, when there exists a considerable semantic gap between input q and input f , as in the case of multi-hop question answering, the effectiveness of this method in unlearning may not be optimal." 
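As a concrete illustration of the three ICUL steps above, the sketch below assembles the unlearning context for a classification query. It only mirrors the template described in the paper; the binary label-flipping rule, the separator, and the example data are assumptions, not the authors' exact implementation.

```python
def flip_label(label):
    # Step 1: flip the label of the sample to be forgotten (binary task assumed).
    return "negative" if label == "positive" else "positive"

def build_icul_prompt(forget_example, normal_examples, query_input):
    forget_input, forget_label = forget_example
    demos = [f"{forget_input} {flip_label(forget_label)}"]   # flipped forget sample
    demos += [f"{x} {y}" for x, y in normal_examples]        # Step 2: s normal demonstrations
    demos.append(query_input)                                # Step 3: the actual query
    return "\n".join(demos)

# Hypothetical usage; the resulting string is sent to the black-box LLM API,
# and no model parameters are touched.
prompt = build_icul_prompt(
    forget_example=("The film was a delight.", "positive"),
    normal_examples=[("Dull and overlong.", "negative"),
                     ("A moving, well-acted drama.", "positive")],
    query_input="The film was a delight.",
)
```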
}, { "figure_ref": [], "heading": "Dataset & Evaluation", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "This section introduces the datasets and their usage in evaluating the unlearning effects of LLMs. From the perspective of datasets and tasks, we categorize these datasets into classification and generation datasets. Details of these datasets are listed in Table 2." }, { "figure_ref": [], "heading": "Classification Datasets", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "IMDB Dataset", "publication_ref": [], "table_ref": [], "text": "The IMDB dataset is a sentiment classification dataset that includes user reviews of movies, directors, and actors, etc. It is divided into two sentiment categories. Both the training and testing sets contain 25,000 samples each. During testing, the particular movie or person names are randomly selected, and the trained model is tasked with forgetting all comments related to these specific movies or person from the training set.\nEnron Dataset Enron derives from the emails and related documents of the Enron corporation, comprising approximately 500,000 emails involving communication among thousands of employees. This dataset has widespread applications in research areas such as fraud detection, social network analysis, and privacy protection. DEPN sampled 25,000 examples from the dataset of 500,000 for evaluating privacy information unlearning.\nCivil Comments Dataset Civil Comments consists of public comments from news websites. In the complete dataset, the training/dev/testing sets are comprised of 1,804,874/9,732/97,320 samples, respectively. Each sample (comment) is labeled with a toxicity tag indicating the degree of toxicity in the comment. When using this dataset, CPEM and TA select a subset of samples with a high toxicity score of 0.8 as the training set to obtain a model targeting toxicity. During testing, a proposed unlearning method is employed to mitigate the toxicity of the post-training model and assess the normalized language expression of the detoxified model. " }, { "figure_ref": [], "heading": "Generation Datasets", "publication_ref": [], "table_ref": [], "text": "SAMSum Dataset SAMSum is a dataset designed generative dialogue summarization, commonly employed in summarization tasks. The dataset consists of 14,732/818/819 samples in the training, development, and test sets, respectively. During testing, a specific speaker is randomly selected, and the model is tasked with forgetting all dialogues containing information about that selected speaker." }, { "figure_ref": [], "heading": "Conclusion & Future Directions", "publication_ref": [ "b28", "b29", "b30", "b31", "b32", "b29", "b33", "b29", "b3", "b34" ], "table_ref": [], "text": "Knowledge unlearning is a technique used to refine the internal storage of knowledge in large language models, aiming to prevent the generation of harmful information during use and safeguard ordinary users from potential harm. This paper provides an overview of research on knowledge unlearning for LLMs, categorizes existing methods, explains their principles and characteristics, and summarizes evaluation datasets and tasks employed in existing studies. Finally, challenges and future prospects in this domain is analyzed. Our goal is to encourage further research in this area and promote the responsible development of LLMs. 
Although knowledge unlearning for large language models has great potential, it is still at an early, under-explored stage. For example:
• There is a risk of catastrophic unlearning, especially in scenarios involving continuous unlearning requests. Most methods have been evaluated only on a limited retention set; even when this small-scale test set suggests that the model has not suffered catastrophic forgetting, untested knowledge may still have been damaged.
• The cross-lingual and cross-modal generalization of unlearning methods matters. For multilingual models like mBERT [29], GPT-4 [30], and LLaMA-2 [31], which store cross-lingual knowledge of hundreds of languages and share a unified representation space among them, it is necessary to examine whether unlearning knowledge in one language (such as English) generalizes to other languages (such as German, French, and Chinese). Similarly, for multimodal models like CLIP [32], BLIP-2 [33], and GPT-4 [30], the cross-modal effects of unlearning need to be considered.
• Forgetting knowledge on specific topics while keeping fundamental language understanding within those topics intact is a critical focus for future research.
In addition, our survey shows that current research on knowledge unlearning is primarily focused on pre-trained language models. For open-domain question-answering LLMs such as ChatGPT [34], GPT-4 [30], LLaMA-2 [4], and Baichuan [35], unlearning methods must additionally account for parameter scale and suitable evaluation data; so far only LLMU and AU have conducted exploratory studies using gradient-based methods. Moreover, for closed-source models like ChatGPT and GPT-4, where only the input and output of the model are accessible, knowledge unlearning methods tailored to this black-box setting deserve particular attention." } ]
Large language models (LLMs) have spurred a new research paradigm in natural language processing in recent years. Through extensive pre-training and fine-tuning on massive data, these models acquire the ability to engage in real-world conversations, showing remarkable capabilities in tasks such as question answering and reasoning. However, a glaring drawback of LLMs lies in their potential memorization of defective or even harmful knowledge, which poses risks of malicious application. Mitigating this issue and turning such models into purer assistants is pivotal for their widespread adoption by ordinary users. Yet iteratively retraining LLMs to purge undesirable knowledge is impractical because of their immense parameter counts and demanding hardware requirements. Knowledge unlearning, derived from analogous studies on machine unlearning, presents a promising avenue to address this concern and is notably advantageous in the context of LLMs: it allows specific harmful knowledge to be removed at minimal cost without affecting unrelated knowledge embedded in the model. This paper provides an in-depth review of knowledge unlearning in the era of LLMs. We first formally define the knowledge unlearning problem and distinguish it from related work. We then categorize existing knowledge unlearning methods into three classes: those based on parameter optimization, parameter merging, and in-context learning, and elucidate the principles and characteristics of each method. The paper further introduces the evaluation datasets used in existing studies. Finally, a comprehensive analysis of ongoing challenges in this domain is presented, along with research and application opportunities.
Figure 1: Knowledge unlearning is used to eliminate harmful, privacy-sensitive, and copyright-related information from LLMs, ensuring the generation of reasonable responses in model output. Blue dots represent normal knowledge learned by the model, while red crosses represent harmful information to be forgotten during the knowledge unlearning process.
Knowledge Unlearning for LLMs: Tasks, Methods, and Challenges
[ { "figure_caption": "2 )2Locality. The unlearned model F (•; θ ′ ) should remain close to the original model F (•; θ) on retention set D r : min θ ′ d (F (D r ; θ) ; F (D r ; θ ′ )) In above formula, I(•) is the distance function of two probability distributions, such as KL-divergence.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Transformer module with unlearning layer", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Computation and unlearning of the task vector. Left: computation of task vector. Right: Negation of the task vector to obtain the unlearning direction.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Details and comparisons of different methods", "figure_data": "CategoryMethodStrategyModel & Task", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Statistics of the evaluation datasets.", "figure_data": "DatasetNum. of sample Train/Dev/TestTaskApplicationRelated worksPart of Training Data Extraction Challenge15000generationprivacy information protectionKULIMDB20000/2000/25000 classification privacy information protectionEULSAMSum14732/818/819generationprivacy information protectionEULPart of PKU-SafeRLHF + TruthfulQA-generationunlearning harmful contentLLMUHarry Potter + BookCorpus-generationcopyright content protectionLLMUPart of HaluEval + TruthfulQA-generationunlearning model hallucinationLLMUPart of Enron25000classification privacy information protectionDEPNPart of Civil Comments1000classificationunlearning harmful contentCPEM, TA", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
Nianwen Si; Hao Zhang; Heyu Chang; Wenlin Zhang; Dan Qu; Weiqiang Zhang
[ { "formula_coordinates": [ 3, 124.49, 628.61, 123.67, 16.73 ], "formula_id": "formula_0", "formula_text": "max θ d (F (D f ; θ) ; F (D f ; θ ′ ))" }, { "formula_coordinates": [ 5, 310.61, 546.39, 228.64, 22.28 ], "formula_id": "formula_1", "formula_text": "1 :D n → (F Dt , F Dn ), KL 2 :D f → (F * , F D f )" }, { "formula_coordinates": [ 6, 98.32, 587.88, 176.01, 30.32 ], "formula_id": "formula_2", "formula_text": "loss(f θ , x) = - n i=1 log (p θ (x i |x 1,2,...,i-1 ))" }, { "formula_coordinates": [ 7, 74.97, 571.05, 222.2, 24.6 ], "formula_id": "formula_3", "formula_text": "θ t+1 = θ t -ϵ 1 • ∇ θt loss f orget -ϵ 2 • ∇ θt loss mismatch -ϵ 3 • ∇ θt loss maintain" }, { "formula_coordinates": [ 7, 311.64, 535.1, 226.58, 9.65 ], "formula_id": "formula_4", "formula_text": "v generic : = v baseline -αReLU(v reinf orced -v baseline )" }, { "formula_coordinates": [ 8, 355.35, 467.31, 139.15, 13.91 ], "formula_id": "formula_5", "formula_text": "⊖θ negation lora = ⊖θ lora = {A, -B}" } ]
2023-11-27
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b4", "b36", "b40", "b8", "b57", "b16", "b17", "b22", "b39", "b29", "b33" ], "table_ref": [], "text": "The success of large language models (LLMs) [5,37,41] in understanding and generating nuanced human text has inspired similar scaling endeavors in computer vision [9,11,39,58]. However, compared with image models, pre-training of large video models is constrained by the need for large high-quality video datasets and substantial computational resources. A prevalent method is therefore transferring pre-trained CLIP image models to the video domain. Nevertheless, as the model size increases, fully fine-tuning video models becomes computationally expensive. This raises a critical question: how can large pre-trained image models, such as ViT-E [39] with its 4.4 billion parameters, be effectively adapted to video understanding?\nTo accommodate the rapid expansion in model size, Parameter-Efficient Fine-Tuning (PEFT) methods [17,18,23,40,57], which fine-tune only a small subset of parameters, were proposed in natural language processing (NLP). Among these methods, adapter-based methods [20,30,34,55], lightweight modules inserted into pre-trained models, are widely used for video action recognition and text-video retrieval due to their efficiency and adaptability. Nevertheless, adapter-based methods still require backpropagation through the frozen layers of the model, which incurs unnecessary memory cost, as shown in Fig. 2 " }, { "figure_ref": [ "fig_1" ], "heading": "(Left).", "publication_ref": [ "b39", "b52", "b21", "b51", "b43", "b30", "b23", "b32", "b47", "b48", "b50", "b12", "b26", "b27", "b30", "b2", "b6", "b20", "b29", "b33", "b34", "b16", "b17", "b22", "b39" ], "table_ref": [], "text": "To further reduce memory usage, LST [40] first introduces a side network attached to the frozen pre-trained model for NLP tasks, as shown in Fig. 2 (Right), eliminating the need for backpropagation within the pre-trained model. A similar idea [53] has been adopted in computer vision, where a side network is used to predict mask proposals and attention bias for semantic segmentation. However, the exploration of side networks in video understanding remains limited.\nIn this work, we introduce Side4Video, a novel and memory-efficient method for fine-tuning pre-trained image models for video understanding tasks. In addition, we explore the enhancements afforded by transferring a larger model to video understanding. To be specific, we devise a spatial-temporal side network attached to frozen pre-trained models, which receives multi-level spatial features from the frozen ViT. Our Side4Video utilizes a divided spatial-temporal module to learn video representations, consisting of temporal convolution, spatial self-attention, and a feed-forward network in each block. Beyond simply opting for a low-dimensional side network to minimize memory usage, we investigate a variety of strategies to further conserve memory and bolster temporal reasoning capability, including removing the [CLS] token in the side network, memory-efficient temporal convolution, and [CLS] token shift spatial attention. 
Thanks to this structure, our approach enables us to transfer ViT-E to video understanding tasks with a small amount of computational resources.\nContrary to previous PEFT methods which are applied to a single task, we evaluate our model on both unimodal and cross-modal video tasks (i.e., action recognition, and text-video retrieval) across six popular benchmarks (i.e., Something-Something (SS) V1&V2 [16], Kinetics-400 [22], MSR-VTT [52], MSVD [7], and VATEX [44]).\nOur contributions are summarized as follows: • We introduce an innovative method for memory-efficient fine-tuning of pre-trained image models on video tasks. • For action recognition, our method can achieve a 75% reduction in memory usage and a 2.2% increase in accuracy on SSV2, surpassing the previous Video Adapter [55].\nIn text-video retrieval, our method achieves a 30% memory reduction while improving the R@1 metric by 1.1 on MSR-VTT, compared to the classic CLIP4Clip [31]. • To our knowledge, this is the pioneering work in efficiently transferring a large image backbone, ViT-E/14, to video understanding tasks. By scaling up the model to ViT-E/14, which is 14.5 times larger than ViT-L/14, our model delivers state-of-the-art performance on both unimodal and cross-modal video tasks. CLIP for Video Understanding. Due to its impressive generalization ability, CLIP is extensively expanded to action recognition [24,33,48,49,51] and text-video retrieval [13,14,27,28,31,43,47]. However, these methods typically require fully fine-tuning the whole model, which is computationally intensive. To mitigate these issues, recent works [20, 21,30,34,35,55] extend the PEFT methods [17,18,23,40,57] from NLP to the video domain." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b33", "b52", "b34", "b34" ], "table_ref": [], "text": "Large\nFor action recognition, ST-adapter [34] and on memory reduction by implementing a lightweight transformer attached to pre-trained models for NLP tasks and SAN [53] leverages this technique to image semantic segmentation. These methods focus more on same modality and implement their side network by a lightweight transformer. Our work also explores cross-modal capability of side network. Note that several works [26,35] share the similar thoughts in video domains which avoid backpropagation through the pre-trained models. EVL [26] adopts a parallel transformer decoder to extract spatial features from frozen CLIP while DiST [35] uses an integration branch to fuse the features from spatial encoder and temporal encoder, which spatial encoder is frozen CLIP. As distinct from their approaches, we introduce a spatial-temporal side encoder to learn video representation which has better continuity and scalability. Furthermore, we successfully transfer a large model for video understanding tasks to explore the advantages brought by an increased model size." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [ "b30", "b30", "b39" ], "table_ref": [], "text": "ViT splits an image I ∈ R H×W ×C into a sequence of non-overlapping patches and then project them into the embedding space as x e = [x 1 , x 2 , ..., x N ], x e ∈ R N ×D , where N denotes the number of patches and D is the hidden di-mension. 
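To make the patchify step just described concrete, the snippet below is a minimal PyTorch sketch of a standard ViT patch embedding; the class name, argument names, and default sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Minimal ViT-style patchify: split an image into non-overlapping
    patches and linearly project each patch to a D-dimensional embedding."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2  # N
        # A conv with kernel = stride = patch_size is equivalent to cutting
        # patches and applying a shared linear projection to each of them.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                    # x: (B, C, H, W)
        x = self.proj(x)                     # (B, D, H/P, W/P)
        return x.flatten(2).transpose(1, 2)  # (B, N, D) = x_e

x_e = PatchEmbed()(torch.randn(2, 3, 224, 224))  # -> shape (2, 196, 768)
```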
Subsequently, ViT prepends a learnable [CLS] token x_0 to x_e, giving x_e = [x_0, x_1, x_2, ..., x_N], and adds a positional embedding E_pos to x_e as Z_0 = x_e + E_pos, where Z_0 is the final input fed to a sequence of transformer blocks.\nConsidering the T frames f_t of a video V = [f_1, f_2, ..., f_T], our work focuses on fine-tuning a large pre-trained ViT for video understanding in a memory-efficient way. Adding adapters inside the frozen pre-trained model causes additional backpropagation through the large frozen backbone. Posterior methods such as meanP [31] and Seq-Transf [31], which model spatial-temporal features after the frozen ViT, avoid this situation. However, posterior structures neglect the low-level features that are important to video understanding tasks. Inspired by LST [40], we propose a spatial-temporal side network that utilizes multi-level spatial features to transfer image models to video understanding tasks in a memory-efficient way." }, { "figure_ref": [ "fig_3" ], "heading": "Overview", "publication_ref": [], "table_ref": [], "text": "We introduce Side4Video, a method that fully leverages the multi-level features of ViT while avoiding backpropagation through the large pre-trained model. By freezing the pre-trained model and only updating the side network parameters, our approach significantly minimizes the memory footprint. Specifically, Side4Video is constructed as a lightweight spatial-temporal side network attached to the pre-trained model, consisting of l layers in d dimensions. The side network is seamlessly integrated with the pre-trained model, receiving multi-level features from ViT before each side block. Each Side4Video block is composed of temporal convolution, [CLS] token shift self-attention, and an MLP layer, as depicted in Fig. 3. Finally, the output Z_out ∈ R^{T×D} from ViT's [CLS] token maintains the original zero-shot capability, while the output s_out ∈ R^{T×N×d} from the side network captures comprehensive video information. We deploy Global Average Pooling (GAP) on s_out to obtain a global video representation for action recognition, while preserving frame-level global representations to support fine-grained matching for text-video retrieval." }, { "figure_ref": [], "heading": "Side4Video", "publication_ref": [], "table_ref": [], "text": "Our Side4Video block is composed of temporal convolution, [CLS] token shift spatial self-attention, and an MLP layer.\nHere we describe the Side4Video block in detail." }, { "figure_ref": [], "heading": "Remove [CLS] token in side network.", "publication_ref": [ "b44", "b45", "b0", "b28", "b44", "b18", "b1", "b24", "b27" ], "table_ref": [], "text": "The [CLS] token of ViT is the global image representation. In the video domain, a common practice is to average the [CLS] tokens of each frame as the final video representation. However, updating a learnable token increases the memory consumption. We find that GAP on patch tokens achieves competitive performance, while introducing extra [CLS] tokens in the side network adds an unnecessary memory footprint. Moreover, in order to enhance temporal modelling and harmonize the input paradigm of each block (Eq. (2)), we use a 3D convolution to project the video frames to the sequence s_0 without an additional [CLS] token, s_0 ∈ R^{T×N×d}. Feature fusion on patch tokens. Side4Video effectively leverages the multi-level spatial features of ViT. To achieve this, we implement a linear projection Down(·) to convert the D-dimensional ViT features Z^l_out into d-dimensional features z^l_out. 
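The sketch below illustrates one way a frozen backbone could expose such multi-level features to the side network, with a per-layer Down(·) projection. The forward-hook mechanism, the `blocks` attribute of the ViT, and all class and argument names here are assumptions for illustration, not the paper's actual code.

```python
import torch
import torch.nn as nn

class MultiLevelTap(nn.Module):
    """Sketch: record intermediate patch-token features Z^l_out from a frozen
    ViT and down-project each tapped layer from D to the side dimension d."""
    def __init__(self, vit, tap_layers, vit_dim=768, side_dim=320):
        super().__init__()
        self.vit = vit.eval()
        for p in self.vit.parameters():
            p.requires_grad_(False)          # frozen backbone: no gradients stored
        self.tap_layers = tap_layers
        self.norm = nn.ModuleList([nn.LayerNorm(vit_dim) for _ in tap_layers])
        self.down = nn.ModuleList([nn.Linear(vit_dim, side_dim) for _ in tap_layers])
        self._feats = {}
        for i in tap_layers:                 # hooks store each block's output
            self.vit.blocks[i].register_forward_hook(
                lambda mod, inp, out, i=i: self._feats.__setitem__(i, out.detach()))

    @torch.no_grad()
    def run_backbone(self, frames):          # frames: (B*T, C, H, W)
        self.vit(frames)

    def fused_inputs(self):
        # z^l_out = Down(Norm(Z^l_out)); the [CLS] token is handled separately
        return [down(norm(self._feats[i]))
                for i, norm, down in zip(self.tap_layers, self.norm, self.down)]
```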
This projection is applied to both the [CLS] token and the patch tokens at each layer, and we only fuse the ViT patch-token features z^l_out with the Side4Video features s^{l-1}_out by element-wise addition; the [CLS] token is used in the spatial self-attention. The fusion strategy is:\nz^l_out = Down(Norm(Z^l_out)),  (1)\ns^l_in = s^{l-1}_out + z^l_out.  (2)\nTemporal module in Side4Video. Convolution [6,42,45,46] and self-attention [1,3,29] are two popular ways for temporal modeling. To minimize the training memory cost, we investigate the impact of 3D convolution and temporal attention on memory footprint and performance; details are shown in Sec. 4.5. Although temporal attention is good at long-range modeling, temporal convolution is more memory-efficient and easier to converge. Following MVFNet [45], we employ depth-wise separable temporal convolutions to further reduce memory. In brief, the process starts with a 1 × 1 × 1 point-wise convolution, followed by a 3 × 1 × 1 channel-wise temporal convolution and another 1 × 1 × 1 point-wise convolution, forming the depth-wise separable convolution. We also find that 3D batch normalization [19] effectively enhances spatial-temporal modeling. We adopt batch normalization before the temporal convolution and the MLP layer, and keep layer normalization [2] before self-attention.\n[CLS] token shift self-attention. Since the frozen pre-trained [CLS] tokens contain global spatial features, we extend prior works [25,28,59] by shifting the pre-trained [CLS] channels back-and-forth across adjacent frames. Then, we concatenate the shifted token to K and V, where K and V are the key and value in self-attention. In this way, Side4Video learns temporal information from the [CLS] tokens with a negligible memory increase." }, { "figure_ref": [], "heading": "Side4Video for video understanding", "publication_ref": [ "b2", "b55" ], "table_ref": [], "text": "Given a video, the side network generates a video representation s_out ∈ R^{T×N×d}, over which we apply Global Average Pooling (GAP) on the patch tokens to obtain the final representation. We design two GAP variants to yield the final video representations for vision-only and cross-modal tasks, respectively. Side4Video for action recognition. The vision-only task requires models to focus on spatial-temporal modeling to understand dynamic actions. Given that the frozen pre-trained ViT lacks temporal reasoning capability while Side4Video models spatial-temporal features, we obtain the final video representation by performing global average pooling on the output of Side4Video:\ns_final = (1 / (T × N)) Σ_{t,n} s_out.  (3)\nSide4Video for text-video retrieval. Unlike the vision-only task, the cross-modal task requires the video and text models to learn a joint embedding space. CLIP, containing rich vision-text aligned knowledge, is widely used in text-video retrieval. Since the side network is randomly initialized, we leverage the powerful zero-shot capability of CLIP to stabilize training. Specifically, we first average over the patch tokens to obtain frame-level representations from the side network. Then, we project the features back to D dimensions and aggregate them with the [CLS] tokens from the ViT. Subsequently, we reuse the pre-trained projection layer Proj(·) to map the features into the joint embedding space, resulting in the final frame-level representations s_final:\ns_final = Proj(Up((1 / N) Σ_n s_out) + Z_out).  (4)\nSide4Video thus enhances spatial modeling and injects temporal information. 
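To make the block structure above concrete, here is a simplified PyTorch sketch of one Side4Video block and the action-recognition head of Eq. (3). It follows the stated ingredients (fusion by addition, BatchNorm plus depth-wise separable temporal convolution, spatial self-attention with the shifted [CLS] token appended to the keys and values, and an MLP), but the exact normalization placement, the residual connections, and all class and argument names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Side4VideoBlockSketch(nn.Module):
    """One side block. Patch features s: (B, T, N, d) with N = H*W; cls: (B, T, d)."""
    def __init__(self, dim, heads=8, grid=(14, 14)):
        super().__init__()
        self.h, self.w = grid
        # temporal module: BN3d, then point-wise -> depth-wise temporal -> point-wise conv
        self.bn1 = nn.BatchNorm3d(dim)
        self.pw1 = nn.Conv3d(dim, dim, 1)
        self.dw  = nn.Conv3d(dim, dim, (3, 1, 1), padding=(1, 0, 0), groups=dim)
        self.pw2 = nn.Conv3d(dim, dim, 1)
        # spatial self-attention; the shifted [CLS] token is appended to K and V
        self.ln   = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # MLP preceded by BatchNorm over the channel dimension
        self.bn2 = nn.BatchNorm1d(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def temporal(self, s):
        B, T, N, d = s.shape
        x = s.permute(0, 3, 1, 2).reshape(B, d, T, self.h, self.w)  # (B, d, T, H, W)
        x = self.pw2(self.dw(self.pw1(self.bn1(x))))
        return x.reshape(B, d, T, N).permute(0, 2, 3, 1)            # back to (B, T, N, d)

    @staticmethod
    def shift_cls(cls_tok):
        """Shift half of the [CLS] channels forward and half backward in time."""
        fwd, bwd = cls_tok.chunk(2, dim=-1)
        return torch.cat([fwd.roll(1, dims=1), bwd.roll(-1, dims=1)], dim=-1)

    def forward(self, s_prev, z, cls_tok):
        s = s_prev + z                                    # fusion, Eq. (2)
        s = s + self.temporal(s)                          # temporal depth-wise separable conv
        B, T, N, d = s.shape
        q  = self.ln(s).reshape(B * T, N, d)
        c  = self.ln(self.shift_cls(cls_tok)).reshape(B * T, 1, d)
        kv = torch.cat([c, q], dim=1)                     # keys/values = [shifted CLS; patches]
        s  = s + self.attn(q, kv, kv, need_weights=False)[0].reshape(B, T, N, d)
        y  = self.bn2(s.reshape(-1, d)).reshape(B, T, N, d)
        return s + self.mlp(y)

def action_head(s_out):
    """Eq. (3): global average pooling over time and patch tokens."""
    return s_out.mean(dim=(1, 2))
```

For retrieval, Eq. (4) would instead average only over the patch tokens of each frame, up-project the result back to D dimensions, add ViT's [CLS] output Z_out, and reuse CLIP's pre-trained projection layer.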
Finally, we employ advanced token-wise fine-grained matching [43,56] instead of simple global matching to generate the similarity matrix for text-video retrieval." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experiment Settings", "publication_ref": [ "b21", "b51", "b43", "b35", "b9" ], "table_ref": [], "text": "Datasets. To demonstrate the effectiveness of our method, we conduct a comprehensive evaluation on two popular video understanding tasks, i.e., action recognition and text-video retrieval. For action recognition, we employ three widely adopted benchmarks to evaluate our model, including Something-Something V1&V2 (SSV1 and SSV2) [16] and Kinetics-400 (K400) [22]. For text-video retrieval, we adopt three well-known benchmarks, including MSR-VTT [52], MSVD [7] and VATEX [44]. The statistics of these datasets are provided in the Supplementary Material. Implementation Details. In this paper, we adopt OpenAI-CLIP [36] for ViT-B/16 and ViT-L/14, and EVA-CLIP [39] for ViT-E/14. Following ViT-E/14, we implement flash attention [10] and post normalization to maintain consistency. Tab. 1 presents the configuration of our model for action recognition. By adjusting the dimension and the number of layers, our model balances memory usage with performance. For text-video retrieval, we construct 320-dimensional side networks with 12, 24, and 32 layers for ViT-B, ViT-L, and ViT-E, respectively. Constrained by a 40G memory limit, we only train a scaled-down version of ViT-E. Although this lightweight model does not fully exploit ViT-E's capabilities, it still represents a notable advancement over ViT-L. More details are provided in the Supplementary Material." }, { "figure_ref": [], "heading": "Memory Comparison", "publication_ref": [ "b34", "b33", "b30" ], "table_ref": [], "text": "Tab. 2 presents the training memory usage and performance comparison with existing efficient fine-tuning methods on SSV2. For a fair comparison, we measure the memory footprint within the same environment (A100, 80G), using 8 frames as model input. All the models are tested with 1 spatial crop and 3 temporal clips here. Benefiting from the spatial-temporal side network, Our-B/16 yields a remarkable 70% reduction in memory consumption while simultaneously improving top-1 accuracy by 1.5% compared to ST-Adapter-B/16. Additionally, it is worth noting that another side-tuning-like method, DiST [35], tends to use more tunable parameters than adapter-based methods, i.e., 19M on DiST-B vs. 7M on ST-Adapter-B. In contrast, Our-B/16 reduces the tunable parameters to 4M, which is more parameter-efficient than ST-Adapter [34] and AIM [55]; scaling up our model by increasing l and d, Our-B/16 and Our-L/14 achieve top-1 accuracies of 70.2% and 71.8%, improving by 1.5% and 1.0% compared to DiST-B/16 and DiST-L/14. In addition, we also provide the memory comparison on text-video retrieval in Tab. 3. Compared to CLIP4Clip [31], our method achieves a 30% reduction in memory usage while improving Recall@1 by 1.1%." }, { "figure_ref": [], "heading": "Comparisons on Action Recognition", "publication_ref": [ "b30" ], "table_ref": [], "text": "Results on Something-Something V1&V2. Tab. 4 presents the comparison with state-of-the-art methods on SSV1 and SSV2, where Side4Video surpasses all the frozen-backbone methods, and scaling the backbone up to ViT-E/14 reaches the highest accuracy of 67.3% on SSV1 and 75.2% on SSV2. Results on Kinetics-400. Tab. 5 presents the performance comparison on Kinetics-400, where our model remains competitive while using fewer input frames than prior methods. Scaling up the pre-trained model to ViT-E, our model enhances accuracy by 1.6% over ViT-L/14, attaining an accuracy of 88.6%." 
}, { "figure_ref": [], "heading": "Comparisons on Text-Video Retrieval", "publication_ref": [ "b20", "b29" ], "table_ref": [], "text": "Beyond action recognition, we also evaluate our model on the cross-modal text-video retrieval task. Unlike other efficient fine-tuning methods [20,21,30] tailored exclusively for text-video retrieval, our work concentrates on the video component. Consequently, we keep the ViT frozen, opting to update only the side network and the text encoder. As a baseline for comparison, we present results from CLIP4Clip ( ViT), which similarly freezes the ViT and updates the text encoder." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b27", "b6", "b30", "b6" ], "table_ref": [], "text": "Performance on MSR-VTT, MSVD, and VATEX. As shown in Tab. 6, using a ViT-B/32 backbone, our model achieves a 1.1% improvement in Recall@1 while reducing memory consumption by 30% compared to the full fine-tuning approach of CLIP4Clip [31]. When considering methods with a frozen backbone, our approach surpasses the baseline, CLIP4Clip ( ViT), by a substantial 4.2% in Recall@1. Regarding PEFT methods, our method also shows notable performance enhancements. By scaling our model up to ViT-E/14, we set new state-of-the-art results with 52.3% on MSR-VTT, 56.1% on MSVD, and 68.8% on VATEX, exceeding the performance of the prior SOTA Cap4Video [47] by margins of 0.9%, 4.3%, and 2.2%, respectively." }, { "figure_ref": [], "heading": "Top", "publication_ref": [], "table_ref": [], "text": "(Fragment of Tab. 9a: top fusion, ViT-E/14, 64.0%; interval fusion, ViT-E/14, 64.7%.)" }, { "figure_ref": [], "heading": "Ablation study", "publication_ref": [], "table_ref": [], "text": "As shown in Tab. 9, we conduct ablations on the Something-Something V1 dataset. The impact of fusion layers. Under memory limitations (A100 40GB), we deploy a 32-layer Side4Video for the 64-layer ViT-E/14. Hence, we explore how fusing features at varying depths impacts performance. We evaluate two fusion strategies: top and interval. The top strategy integrates high-level features from the 32nd to 64th layers, while the interval strategy fuses multi-level features every 2 layers from the beginning, i.e., the 2nd, 4th, ..., 64th layers. The results in Tab. 9a reveal that the interval-based method yields a 0.7% improvement in accuracy over the top-based method. These findings suggest that multi-level fusion is more beneficial for video understanding tasks. The study of different temporal components. Although self-attention specializes in long-range modeling, it is more data-hungry and memory-inefficient compared to convolution. The results in Tab. 9b show that temporal convolution reaches higher accuracy, with an increase of 0.6% on SSV1.\n[CLS] token shift self-attention. The token shift technique learns features of adjacent frames without an increase in memory footprint. In ViT, the [CLS] tokens summarize the spatial information of each frame. Leveraging this, we employ [CLS] token shift to enhance the temporal reasoning capability of our model. We show the effectiveness of [CLS] token shift in Tab. 9c, which brings a 0.6% accuracy improvement with a negligible increase in memory consumption, since the number of keys and values only grows from N to N+1. Exploration of video representation. 
We explore multiple methods to obtain the final video representation, including the use of the [CLS] token within ViT, the incorporation of an additional [CLS] token in the side network, and the application of GAP. For the first method, we concatenate the dimensionality-reduced [CLS] token to the beginning of the input sequence for the side network. Note that we cannot update the [CLS] token parameters, which would lead to backpropagation through the pre-trained model, while an extra [CLS] token obtains poor performance. According to Tab. 9d, GAP reaches both the highest performance and the smallest memory footprint. The performance gap between GAP and the [CLS] token may be due to the learning-rate settings, as mentioned in [12]. In conclusion, we adopt GAP for the final representation. The impact of layers and dimension. By adjusting the number of layers l and the dimension d, we can control the memory consumption and performance of our model. Increasing either l or d enhances the capacity of the model and its memory usage, but their effects on the model are different. Increasing l enables the model to utilize features from more diverse levels of ViT, while increasing d enhances the modeling ability of each layer. In the study of layers, we adopt the interval fusion strategy mentioned above for the 4-layer and 6-layer models. As shown in Tab. 9e, increasing the number of layers from 4 to 12 gradually boosts performance, and the model that fuses features from all layers achieves the highest accuracy of 58.5%. The results in Tab. 9f show that a dimension of 320 achieves the best performance. A comparative evaluation of the significance of d versus l reveals that our model with 4 layers at 320 dimensions outperforms 12 layers with 128 dimensions, under a comparable memory footprint. In conclusion, given equivalent memory usage constraints, we prefer a higher-dimensional side network to a deeper one." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, our motivation is to transfer large pre-trained image models to video understanding tasks. To this end, we introduce Side4Video for memory-efficient image-to-video transfer learning for video understanding. Side4Video receives multi-level features from the frozen ViT and avoids backpropagation through the pre-trained model. We achieve better performance than previous efficient fine-tuning methods. Scaling up the model size, we transfer a huge pre-trained model (i.e., ViT-E) to video understanding tasks and observe notable improvements. In the era of large models, we hope our work can inspire researchers who wish to fine-tune larger models with limited resources." }, { "figure_ref": [], "heading": "Side4Video: Spatial-Temporal Side Network for Memory-Efficient Image-to-Video Transfer Learning", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "A. Data Efficiency " }, { "figure_ref": [ "fig_7" ], "heading": "C. Visualization", "publication_ref": [], "table_ref": [], "text": "In Fig. 6, we present visualizations of the attention maps generated by CLIP and Side4Video. These illustrations demonstrate that our model concentrates more precisely on dynamically moving target objects. Notably, as observed in the second frame, even when only a portion of the basketball is in view, our model proficiently traces the basketball's trajectory, which showcases the spatial-temporal capability of our model." 
}, { "figure_ref": [], "heading": "D. More Results on Text-Video Retrieval", "publication_ref": [], "table_ref": [], "text": "Tab. 10 and Tab. 11 present more results on MSVD and VATEX, respectively. Our method also exhibits excellent performance on video-to-text retrieval. Additionally, we observe that Our L/14 outperforms Our E/14 on video-to-text retrieval, which may be attributed to the pre-training data and the backbones." }, { "figure_ref": [], "heading": "Method Pretrain", "publication_ref": [ "b30" ], "table_ref": [], "text": "(Table header fragment of Tab. 10/11: Text2Video and Video2Text metrics R@1, R@5, R@10, MdR, MnR; full fine-tuning baselines such as CLIP4Clip [31] with CLIP-400M pre-training.)" }, { "figure_ref": [], "heading": "E. Additional Implementation Details", "publication_ref": [ "b21", "b51", "b43", "b14", "b30", "b31", "b35", "b21", "b30" ], "table_ref": [], "text": "Dataset. We evaluate our model on two video understanding tasks, i.e., action recognition and text-video retrieval, to demonstrate the effectiveness of our approach.\nFor action recognition, we employ three widely adopted benchmarks to evaluate our model, including Something-Something V1&V2 (SSV1 and SSV2) [16] and Kinetics-400 (K400) [22]. The temporal-related datasets SSV1 and SSV2 contain 110K and 220K videos in 174 classes. The scene-based dataset K400 is a large-scale video dataset comprising 300K video clips in 400 human action classes.\nFor text-video retrieval, we adopt three well-known benchmarks, including MSR-VTT [52], MSVD [7] and VATEX [44]. MSR-VTT consists of 10K videos with 20 textual descriptions for each video. We split the dataset following [15,31,32], which includes 9K videos for training and 1K videos for testing. MSVD consists of 1,970 video clips with approximately 80K descriptive sentences, where the train, validation, and test sets contain 1,200, 100, and 670 videos. VATEX is a relatively large dataset containing 34,991 videos with multiple annotations. There are 25,991 videos for training, 1,500 videos for validation, and 15,500 videos for testing. Implementation Details. All experiments are implemented in PyTorch. For both action recognition and text-video retrieval, we employ OpenAI-CLIP [36] for ViT-B/16 and ViT-L/14, and EVA-CLIP [39] for ViT-E/14.\nFor action recognition, Tab. 12 presents the configuration list for Kinetics-400 [22] and Something-Something V1&V2 [16].\nFor text-video retrieval, we freeze all the ViT encoders except the final linear projection and update the Side4Video and text encoder parameters. We construct Side4Video with 12, 24, and 32 layers for ViT-B/16, ViT-L/14, and ViT-E/14, respectively. For all models, the dimensionality of the side network is set to 320. Following CLIP4Clip [31], we use a unified training setting for all the datasets (i.e., MSR-VTT" }, { "figure_ref": [], "heading": "", "publication_ref": [ "b43" ], "table_ref": [], "text": "[52], MSVD [7] and VATEX [44]). We set the text length to 32 and the video length to 12. We train our model with a batch size of 128 for 5 epochs with the Adam optimizer (β1 = 0.9, β2 = 0.98). The initial learning rate is 1e-7 for the CLIP modules and 1e-4 for the newly added modules." } ]
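As a small illustration of the retrieval training recipe just described, the sketch below builds an Adam optimizer with the two learning-rate groups mentioned in the text (1e-7 for CLIP modules, 1e-4 for new modules, β1 = 0.9, β2 = 0.98). The `clip.` name prefix used to split the parameters is a hypothetical convention for this sketch, not taken from the paper.

```python
import torch

def build_optimizer(model):
    """Two parameter groups: trainable CLIP parts at 1e-7, new side/text modules at 1e-4."""
    clip_params, new_params = [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue  # frozen ViT weights are skipped entirely
        (clip_params if name.startswith("clip.") else new_params).append(p)
    return torch.optim.Adam(
        [{"params": clip_params, "lr": 1e-7},
         {"params": new_params, "lr": 1e-4}],
        betas=(0.9, 0.98))
```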
Large pre-trained vision models achieve impressive success in computer vision. However, fully fine-tuning large models for downstream tasks, particularly in video understanding, can be prohibitively expensive in computation. Recent studies have therefore turned their focus towards efficient image-to-video transfer learning. Nevertheless, existing efficient fine-tuning methods pay little attention to training memory usage and to the exploration of transferring a larger model to the video domain. In this paper, we present a novel Spatial-Temporal Side Network for memory-efficient fine-tuning of large image models to video understanding, named Side4Video. Specifically, we introduce a lightweight spatial-temporal side network attached to the frozen vision model, which avoids backpropagation through the heavy pre-trained model and utilizes multi-level spatial features from the original image model. This extremely memory-efficient architecture enables our method to reduce memory usage by 75% compared to previous adapter-based methods. In this way, we can transfer a huge ViT-E (4.4B) to video understanding tasks, a model 14× larger than ViT-L (304M). Our approach achieves remarkable performance on various video datasets across unimodal and cross-modal tasks (i.e., action recognition and text-video retrieval), especially on Something-Something V1&V2 (67.3% & 74.6%), Kinetics-400 (88.6%), MSR-VTT (52.3%), MSVD (56.1%) and VATEX (68.8%). We release our code at https://github.com/HJYao00/Side4Video.
Side4Video: Spatial-Temporal Side Network for Memory-Efficient Image-to-Video Transfer Learning
[ { "figure_caption": "Figure 1 .1Figure 1. Comparison of GPU memory usage for training across backbones of varying parameter scales against previous efficienttraining methods.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Illustration of Adapter-based and Side-Tuning method.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Vision Model. The advent ofViT [12] signaled a leap forward in the pre-training of large-scale vision models, distinguished by their transferability and scalability. The CLIP model[36], pre-trained on 400 million image-text pairs, has garnered significant interest due to its remarkable generalization capabilities and its ability to align knowledge across visual and textual domains. Building on CLIP's success, later works[9, 39,54] have expanded on the size of both datasets and models, further augmenting CLIP's representational capability. A noteworthy work is EVA-CLIP [39], which leverages LAION-2B [38] consisting of 2.32 billion image-text pairs, to pre-train a 64-layer ViT-E/14 with 4.4B parameters, achieving impressive results. Yet, the efficient adaptation of such huge image models to the video domains is extremely expensive and rarely explored.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Illustration of our Side4Video for video understanding. (a) An overview of our Side4Video video framework. (b) The details of our Side4Video block. (c) Application of Side4Video in action recognition, and (d) its use in text-video retrieval.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 44Fig. 4 illustrates the impact of varying training dataset sizes on the performance of our Side4Video. Our model showcases remarkable data efficiency compared with other methods. For example, with only 5% of the Something-Something V2 dataset, Our B/16 model attains a Top-1 accuracy of 48.1%, which is approximately 13% higher than DiST B/16. When scaling up the backbone to ViT-E/14, our model achieves an impressive accuracy rate of 60.2% with 5% of the training data.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Comparison on data efficiency.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Comparison of CLIP and Side4Video accuracy per category on Kinetics-400. Here we only report the 10 worst category results of CLIP.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Visualization of Side4Video attention map. We visualize an action \"dribbling basketball\" from Kinetics-400 dataset.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "and AIM[55]. Compared with DiST, Our-B/16 and Our-L/14 save 30% and 16% memory compared to DiST-B/16 and DiST-L/14 while achieving comparable performance. Furthermore, our method exhibits excellent scalability. Scaling up our model by increasing l and d, Our-B/16 and Our-L/14 achieve the highest accuracy rate of 70.2% and 71.8%, improving by Model configurations. 
We probe the performance of our model at various scales by manipulating its dimensions and the number of layers. denotes the lightweight version.", "figure_data": "MethodsViT layers dim layers dim Side4VideoOur-B/16127686192Our-B/161276812320Our-L/1424102412320Our-L/1424102424512Our-E/1464179232576MethodsGFLOPs TP (M) Mem (G) SSV2 (%)ViT-B/16ST-Adapter [34]455728.867.1AIM [55]6241435.266.4EVL [26]5128917.961.0DiST [35]4801912.768.7Ours44548.968.6Ours5282118.870.2ViT-L/14ST-Adapter [34] 20622051.470.0AIM [55]28775064.367.6EVL [26]241135033.065.1DiST [35]21303218.170.8Ours20922215.370.6Ours261110237.071.8", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Memory usage and performance comparison on action recognition. The memory footprint comparison of ViT-B/16 and ViT-L/14 with a batch size of 32 and 16. \"TP\" and \"Mem\" denotes the number of tunable parameters and the training memory usage.", "figure_data": "MethodsMemory (G) R@1 (%)CLIP4Clip [31]8.243.1CLIP4Clip ( ViT)3.840.0Ours5.844.2", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Memory usage comparison on MSR-VTT text-to-video retrieval. Backbone: ViT-B/16. denotes frozen image encoder.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison with SOTAs on Something-Something V1&V2. Views = # spatial crops × # temporal clips.", "figure_data": "MethodBackbonesPre-trainFrames×Views TFLOPs V1 Top-1(%) V1 Top-5(%) V2 Top-1(%) V2 Top-5(%)Full Fine-tuningViViT [1]L/16×2 FE IN-21K+K40032×3×41.0×12--65.989.9Video Swin [29]Swin-B IN-21K+K40032×3×10.32×3--69.692.7UniFormerV2 [24] ViT-L/14 CLIP-400M32×3×11.73×362.788.073.094.5ATM [48]ViT-L/14 CLIP-400M16×3×20.84×664.088.073.593.7ATM [48]ViT-L/14Merged-2B16×3×20.84×665.688.674.694.4Frozen backboneEVL-L/14 [26]ViT-L/14 CLIP-400M32×1×33.21×3--66.7-ST-Adapter [34]ViT-L/14 CLIP-400M32×1×32.75×3--72.393.9AIM [55]ViT-L/14 CLIP-400M32×1×33.84×3--70.692.7DiST [35]ViT-L/14 CLIP-400M32×1×32.83×3--73.193.2OursViT-B/16 CLIP-400M8×3×20.18×659.484.870.692.5OursViT-B/16 CLIP-400M16×3×20.36×660.786.071.592.8OursViT-L/14 CLIP-400M8×3×20.87×661.086.771.993.5OursViT-L/14 CLIP-400M16×3×21.74×662.488.173.293.9OursViT-E/14LAION-2B8×3×27.98×665.388.574.394.0OursViT-E/14LAION-2B16×3×215.96×667.388.875.294.0MethodBackbonesPre-trainFrames × ViewsTFLOPsTop-1(%)Top-5(%)Full Fine-tuningViViT [1]H/14×2IN-21K32×3×43.98×1284.995.8Text4Vis † [50]ViT-L/14CLIP-400M32×3×41.66×1287.697.8BIKE † [51]ViT-L/14CLIP-400M16×3×40.83×1288.197.9ATM [48]ViT-L/14CLIP-400M32×3×41.68×1288.097.6Frozen backboneEVL [26]ViT-L/14CLIP-400M32×1×32.69×387.3-ST-Adapter [34]ViT-L/14CLIP-400M32×1×32.75×387.297.6AIM [55]ViT-L/14CLIP-400M32×1×33.74×387.597.7DiST † [35]ViT-L/14CLIP-400M32×1×32.83×388.097.9OursViT-B/16CLIP-400M8×3×40.18×1283.696.0OursViT-B/16CLIP-400M16×3×40.36×1283.996.3OursViT-B/16CLIP-400M32×3×40.72×1284.296.5OursViT-L/14CLIP-400M8×3×40.87×1286.697.4OursViT-L/14CLIP-400M16×3×41.74×1287.097.5OursViT-E/14LAION-2B8×3×47.98×1288.398.0OursViT-E/14LAION-2B16×3×415.96×1288.698.2", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison with SOTAs on Kinetics-400. † represents the method utilizes textual knowledge from text encoder of CLIP. Views = # spatial crops × # temporal clips.", "figure_data": "SSV1 and SSV2. 
For example, Side4Video with 16 framesperformance improvement as the model size increases.achieve comparable results with UniFormerV2 [24] with32 frames (62.4% vs. 62.7% on SSV1, 73.2% vs. 73.0%Results on Kinetics-400. Tab. 5 presents the performanceon SSV2). Moreover, Side4Video surpasses all the frozencomparison on Kinetics-400. We conduct similar resultsbackbone methods and Our L/14 outperforms ST-Adapterwith SSv1 and SSv2. On ViT-L/14, our model with an inputand AIM by 0.9% and 2.6% on SSV2. Scaling up back-bone to ViT-E/14, we reach the highest accuracy of 67.3%on SSV1 and 75.2% on SSV2. We observe an impressive", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results on MSR-VTT 1K. CLIP4Clip ( ViT) is our implementation with frozen image encoder. We report the results without any extra tricks (e.g., DSL[8] or QB-Norm[4]) during inference.", "figure_data": "43.170.480.82.016.243.170.581.22.012.4CLIP2Video [14]CLIP-400M 45.672.681.72.014.643.572.382.12.010.2STAN [27]CLIP-400M 50.075.284.11.5------Cap4Video [47]CLIP-400M 51.475.783.91.012.449.075.285.02.08.0Frozen backboneCLIP4Clip ( ViT) CLIP-400M 40.067.878.42.017.741.168.978.72.012.4CLIP-Prompt [21] CLIP-400M 36.764.676.82.0------CM Adapter [20]CLIP-400M 45.473.382.3-12.846.273.683.8-8.6UniAdapter [30]BLIP-129M 50.573.981.71.0------Our B/32CLIP-400M 44.271.181.02.015.144.672.382.32.09.4Our B/16CLIP-400M 47.273.883.72.013.146.675.884.32.07.9Our L/14CLIP-400M 51.475.884.51.012.550.077.185.91.57.0Our E/14LAION-2B52.375.584.21.012.850.477.486.01.57.1MethodText2Video R@1↑ R@5↑ R@10↑ MdR↓ MnR↓Full Fine-tuningCLIP4Clip [31]46.2 76.1 84.62.0 10.0STAN [27]51.5 80.4 88.51.0-Cap4Video [47]51.8 80.8 88.31.08.3Frozen backboneCLIP4Clip ( ViT) 43.8 73.3 82.72.0 11.1CM Adapter [20]47.4 76.6 85.0-10.2Our B/3244.6 74.9 83.52.0 10.2Our B/1649.0 78.5 86.72.09.1Our L/1454.9 82.1 89.31.07.5Our E/1456.1 81.7 88.81.08.4", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "", "figure_data": "1 97.01.02.7", "figure_id": "tab_6", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Ablation studies of Side4Video on Something-Something V1. Unless otherwise specified, all models use ViT-B/16 with 8 frames under the single view protocol.", "figure_data": "", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Results on MSVD. CLIP4Clip ( ViT) is our implementation with frozen image encoder. 
We report the results without any extra tricks (e.g., DSL[8] or QB-Norm[4]) during inference.", "figure_data": "46.276.184.62.010.056.679.784.31.07.6STAN [27]CLIP-400M 51.580.488.51.0------Cap4Video [47]CLIP-400M 51.880.888.31.08.3-----Frozen backboneCLIP4Clip ( ViT) CLIP-400M 43.873.382.72.011.157.079.284.91.012.1CM Adapter [20]CLIP-400M 47.476.685.0-10.263.690.094.7-3.0Our B/32CLIP-400M 44.674.983.52.010.258.183.690.11.07.7Our B/16CLIP-400M 49.078.586.72.09.162.385.389.81.07.7Our L/14CLIP-400M 54.982.189.31.07.571.794.497.91.02.3Our E/14LAION-2B56.181.788.81.08.465.990.695.41.02.5MethodPretrainText2Video R@1↑ R@5↑ R@10↑ MdR↓ MnR↓ R@1↑ R@5↑ R@10↑ MdR↓ MnR↓ Video2TextFull Fine-tuningCLIP2Video [14]CLIP-400M 57.390.095.51.03.6-----TS2-Net [28]CLIP-400M 59.190.095.21.03.5-----Cap4Video [47]CLIP-400M 66.693.197.01.02.7-----Frozen backboneCLIP4Clip ( ViT) CLIP-400M 55.787.693.81.04.276.398.099.21.01.6CM Adapter [20]CLIP-400M 59.389.895.2-3.574.797.299.1-1.6Our B/32CLIP-400M 58.189.294.81.03.777.297.699.11.01.6Our B/16CLIP-400M 61.791.596.01.03.179.098.099.51.01.5Our L/14CLIP-400M 67.993.997.31.02.682.399.499.91.01.3Our E/14LAION-2B68.893.597.01.02.779.798.999.81.01.4", "figure_id": "tab_9", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Results on VATEX. CLIP4Clip ( ViT) is our implementation with frozen image encoder. We report the results without any extra tricks (e.g., DSL[8] or QB-Norm[4]) during inference.", "figure_data": "", "figure_id": "tab_10", "figure_label": "11", "figure_type": "table" } ]
Huanjin Yao; Wenhao Wu; Zhiheng Li
[ { "authors": "Anurag Arnab; Mostafa Dehghani; Georg Heigold; Chen Sun; Mario Lučić; Cordelia Schmid", "journal": "", "ref_id": "b0", "title": "Vivit: A video vision transformer", "year": "2021" }, { "authors": "Jimmy Lei Ba; Jamie Ryan Kiros; Geoffrey E Hinton", "journal": "", "ref_id": "b1", "title": "Layer normalization", "year": "2016" }, { "authors": "Gedas Bertasius; Heng Wang; Lorenzo Torresani", "journal": "", "ref_id": "b2", "title": "Is space-time attention all you need for video understanding", "year": "2021" }, { "authors": "Simion-Vlad Bogolin; Ioana Croitoru; Hailin Jin; Yang Liu; Samuel Albanie", "journal": "", "ref_id": "b3", "title": "Cross modal retrieval with querybank normalisation", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "NeurIPS", "ref_id": "b4", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Joao Carreira; Andrew Zisserman", "journal": "", "ref_id": "b5", "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "year": "2017" }, { "authors": "David Chen; William B Dolan", "journal": "", "ref_id": "b6", "title": "Collecting highly parallel data for paraphrase evaluation", "year": "2011" }, { "authors": "Xing Cheng; Hezheng Lin; Xiangyu Wu; Fan Yang; Dong Shen", "journal": "", "ref_id": "b7", "title": "Improving video-text retrieval by multi-stream corpus alignment and dual softmax loss", "year": "2021" }, { "authors": "Mehdi Cherti; Romain Beaumont; Ross Wightman; Mitchell Wortsman; Gabriel Ilharco; Cade Gordon; Christoph Schuhmann; Ludwig Schmidt; Jenia Jitsev", "journal": "", "ref_id": "b8", "title": "Reproducible scaling laws for contrastive language-image learning", "year": "2023" }, { "authors": "Tri Dao; Dan Fu; Stefano Ermon; Atri Rudra; Christopher Ré", "journal": "NeurIPS", "ref_id": "b9", "title": "Flashattention: Fast and memory-efficient exact attention with io-awareness", "year": "2022" }, { "authors": "Mostafa Dehghani; Josip Djolonga; Basil Mustafa; Piotr Padlewski; Jonathan Heek; Justin Gilmer; Andreas Peter Steiner; Mathilde Caron; Robert Geirhos; Ibrahim Alabdulmohsin", "journal": "PMLR", "ref_id": "b10", "title": "Scaling vision transformers to 22 billion parameters", "year": "2023" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "ICLR", "ref_id": "b11", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Bo Fang; Wenhao Wu; Chang Liu; Yu Zhou; Yuxin Song; Weiping Wang; Xiangbo Shu; Xiangyang Ji; Jingdong Wang", "journal": "", "ref_id": "b12", "title": "Uatvr: Uncertainty-adaptive text-video retrieval", "year": "2023" }, { "authors": "Han Fang; Pengfei Xiong; Luhui Xu; Yu Chen", "journal": "", "ref_id": "b13", "title": "Clip2video: Mastering video-text retrieval via image clip", "year": "2021" }, { "authors": "Valentin Gabeur; Chen Sun; Karteek Alahari; Cordelia Schmid", "journal": "Springer", "ref_id": "b14", "title": "Multi-modal transformer for video retrieval", "year": "2020" }, { "authors": "Raghav Goyal; Samira Ebrahimi Kahou; Vincent Michalski; Joanna Materzynska; Susanne Westphal; Heuna Kim; Valentin Haenel; Ingo Fruend; Peter Yianilos; Moritz Mueller-Freitag", "journal": "", "ref_id": "b15", "title": "The\" 
something something\" video database for learning and evaluating visual common sense", "year": "2017" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "PMLR", "ref_id": "b16", "title": "Parameter-efficient transfer learning for nlp", "year": "2019" }, { "authors": "J Edward; Phillip Hu; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "ICLR", "ref_id": "b17", "title": "Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Sergey Ioffe; Christian Szegedy", "journal": "pmlr", "ref_id": "b18", "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "year": "2015" }, { "authors": "Haojun Jiang; Jianke Zhang; Rui Huang; Chunjiang Ge; Zanlin Ni; Jiwen Lu; Jie Zhou; Shiji Song; Gao Huang", "journal": "", "ref_id": "b19", "title": "Cross-modal adapter for text-video retrieval", "year": "2022" }, { "authors": "Chen Ju; Tengda Han; Kunhao Zheng; Ya Zhang; Weidi Xie", "journal": "Springer", "ref_id": "b20", "title": "Prompting visual-language models for efficient video understanding", "year": "2022" }, { "authors": "Will Kay; Joao Carreira; Karen Simonyan; Brian Zhang; Chloe Hillier; Sudheendra Vijayanarasimhan; Fabio Viola; Tim Green; Trevor Back; Paul Natsev", "journal": "", "ref_id": "b21", "title": "The kinetics human action video dataset", "year": "2017" }, { "authors": "Brian Lester; Rami Al-Rfou; Noah Constant", "journal": "", "ref_id": "b22", "title": "The power of scale for parameter-efficient prompt tuning", "year": "2021" }, { "authors": "Kunchang Li; Yali Wang; Yinan He; Yizhuo Li; Yi Wang; Limin Wang; Yu Qiao", "journal": "", "ref_id": "b23", "title": "Uniformerv2: Unlocking the potential of image vits for video understanding", "year": "2023" }, { "authors": "Ji Lin; Chuang Gan; Song Han", "journal": "", "ref_id": "b24", "title": "Tsm: Temporal shift module for efficient video understanding", "year": "2019" }, { "authors": "Ziyi Lin; Shijie Geng; Renrui Zhang; Peng Gao; Gerard De Melo; Xiaogang Wang; Jifeng Dai; Yu Qiao; Hongsheng Li", "journal": "Springer", "ref_id": "b25", "title": "Frozen clip models are efficient video learners", "year": "2022" }, { "authors": "Ruyang Liu; Jingjia Huang; Ge Li; Jiashi Feng; Xinglong Wu; Thomas H Li", "journal": "", "ref_id": "b26", "title": "Revisiting temporal modeling for clip-based image-to-video knowledge transferring", "year": "2023" }, { "authors": "Yuqi Liu; Pengfei Xiong; Luhui Xu; Shengming Cao; Qin Jin", "journal": "Springer", "ref_id": "b27", "title": "Ts2-net: Token shift and selection transformer for text-video retrieval", "year": "2022" }, { "authors": "Ze Liu; Jia Ning; Yue Cao; Yixuan Wei; Zheng Zhang; Stephen Lin; Han Hu", "journal": "", "ref_id": "b28", "title": "Video swin transformer", "year": "2022" }, { "authors": "Haoyu Lu; Mingyu Ding; Yuqi Huo; Guoxing Yang; Zhiwu Lu; Masayoshi Tomizuka; Wei Zhan", "journal": "", "ref_id": "b29", "title": "Uniadapter: Unified parameter-efficient transfer learning for cross-modal modeling", "year": "2023" }, { "authors": "Huaishao Luo; Lei Ji; Ming Zhong; Yang Chen; Wen Lei; Nan Duan; Tianrui Li", "journal": "Neurocomputing", "ref_id": "b30", "title": "Clip4clip: An empirical study of clip for end to end video clip retrieval and captioning", "year": "2022" }, { "authors": "Antoine Miech; Dimitri Zhukov; Jean-Baptiste Alayrac; Makarand Tapaswi; Ivan Laptev; Josef 
Sivic", "journal": "", "ref_id": "b31", "title": "Howto100m: Learning a text-video embedding by watching hundred million narrated video clips", "year": "2019" }, { "authors": "Bolin Ni; Houwen Peng; Minghao Chen; Songyang Zhang; Gaofeng Meng; Jianlong Fu; Shiming Xiang; Haibin Ling", "journal": "Springer", "ref_id": "b32", "title": "Expanding language-image pretrained models for general video recognition", "year": "2022" }, { "authors": "Junting Pan; Ziyi Lin; Xiatian Zhu; Jing Shao; Hongsheng Li", "journal": "NeurIPS", "ref_id": "b33", "title": "St-adapter: Parameter-efficient image-to-video transfer learning", "year": "2022" }, { "authors": "Zhiwu Qing; Shiwei Zhang; Ziyuan Huang; Yingya Zhang; Changxin Gao; Deli Zhao; Nong Sang", "journal": "", "ref_id": "b34", "title": "Disentangling spatial and temporal learning for efficient image-to-video transfer learning", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b35", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b36", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "NeurIPS", "ref_id": "b37", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Quan Sun; Yuxin Fang; Ledell Wu; Xinlong Wang; Yue Cao", "journal": "", "ref_id": "b38", "title": "Eva-clip: Improved training techniques for clip at scale", "year": "2023" }, { "authors": "Yi-Lin Sung; Jaemin Cho; Mohit Bansal", "journal": "NeurIPS", "ref_id": "b39", "title": "Lst: Ladder side-tuning for parameter and memory efficient transfer learning", "year": "2022" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b40", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Du Tran; Lubomir Bourdev; Rob Fergus; Lorenzo Torresani; Manohar Paluri", "journal": "", "ref_id": "b41", "title": "Learning spatiotemporal features with 3d convolutional networks", "year": "2015" }, { "authors": "Qiang Wang; Yanhao Zhang; Yun Zheng; Pan Pan; Xian-Sheng Hua", "journal": "", "ref_id": "b42", "title": "Disentangled representation learning for textvideo retrieval", "year": "2022" }, { "authors": "Xin Wang; Jiawei Wu; Junkun Chen; Lei Li; Yuan-Fang Wang; William Yang; Wang ", "journal": "", "ref_id": "b43", "title": "Vatex: A large-scale, highquality multilingual dataset for video-and-language research", "year": "2019" }, { "authors": "Wenhao Wu; Dongliang He; Tianwei Lin; Fu Li; Chuang Gan; Errui Ding", "journal": "", "ref_id": "b44", "title": "Mvfnet: Multi-view fusion network for efficient video recognition", "year": "2021" }, { "authors": "Wenhao Wu; Yuxiang Zhao; Yanwu Xu; Xiao Tan; Dongliang He; Zhikang Zou; Jin Ye; Yingying Li; Mingde Yao; Zichao Dong", "journal": "", "ref_id": 
"b45", "title": "Dsanet: Dynamic segment aggregation network for video-level representation learning", "year": "2021" }, { "authors": "Wenhao Wu; Haipeng Luo; Bo Fang; Jingdong Wang; Wanli Ouyang", "journal": "", "ref_id": "b46", "title": "Cap4video: What can auxiliary captions do for text-video retrieval?", "year": "2023" }, { "authors": "Wenhao Wu; Yuxin Song; Zhun Sun; Jingdong Wang; Chang Xu; Wanli Ouyang", "journal": "", "ref_id": "b47", "title": "What can simple arithmetic operations do for temporal modeling?", "year": "2023" }, { "authors": "Wenhao Wu; Zhun Sun; Wanli Ouyang", "journal": "", "ref_id": "b48", "title": "Revisiting classifier: Transferring vision-language models for video recognition", "year": "2023" }, { "authors": "Wenhao Wu; Zhun Sun; Yuxin Song; Jingdong Wang; Wanli Ouyang", "journal": "IJCV", "ref_id": "b49", "title": "Transferring vision-language models for visual recognition: A classifier perspective", "year": "2023" }, { "authors": "Wenhao Wu; Xiaohan Wang; Haipeng Luo; Jingdong Wang; Yi Yang; Wanli Ouyang", "journal": "", "ref_id": "b50", "title": "Bidirectional cross-modal knowledge exploration for video recognition with pretrained vision-language models", "year": "2023" }, { "authors": "Jun Xu; Tao Mei; Ting Yao; Yong Rui", "journal": "", "ref_id": "b51", "title": "Msr-vtt: A large video description dataset for bridging video and language", "year": "2016" }, { "authors": "Mengde Xu; Zheng Zhang; Fangyun Wei; Han Hu; Xiang Bai", "journal": "", "ref_id": "b52", "title": "Side adapter network for open-vocabulary semantic segmentation", "year": "2023" }, { "authors": "An Yang; Junshu Pan; Junyang Lin; Rui Men; Yichang Zhang; Jingren Zhou; Chang Zhou", "journal": "", "ref_id": "b53", "title": "Chinese clip: Contrastive vision-language pretraining in chinese", "year": "2022" }, { "authors": "Taojiannan Yang; Yi Zhu; Yusheng Xie; Aston Zhang; Chen Chen; Mu Li", "journal": "", "ref_id": "b54", "title": "Aim: Adapting image models for efficient video action recognition", "year": "2023" }, { "authors": "Lewei Yao; Runhui Huang; Lu Hou; Guansong Lu; Minzhe Niu; Hang Xu; Xiaodan Liang; Zhenguo Li; Xin Jiang; Chunjing Xu", "journal": "", "ref_id": "b55", "title": "Filip: Fine-grained interactive language-image pre-training", "year": "2021" }, { "authors": "Elad Ben Zaken; Yoav Goldberg; Shauli Ravfogel", "journal": "", "ref_id": "b56", "title": "Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models", "year": "2022" }, { "authors": "Xiaohua Zhai; Alexander Kolesnikov; Neil Houlsby; Lucas Beyer", "journal": "", "ref_id": "b57", "title": "Scaling vision transformers", "year": "2022" }, { "authors": "Hao Zhang; Yanbin Hao; Chong-Wah Ngo", "journal": "", "ref_id": "b58", "title": "Token shift transformer for video classification", "year": "2021" }, { "authors": "Alexander Jeffrey O Zhang; Amir Sax; Leonidas Zamir; Jitendra Guibas; Malik", "journal": "Springer", "ref_id": "b59", "title": "Side-tuning: a baseline for network adaptation via additive side networks", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 107.51, 552.88, 178.85, 12.69 ], "formula_id": "formula_0", "formula_text": "z l out = Down(N orm(Z l out )),(1)" }, { "formula_coordinates": [ 4, 130.31, 575.65, 156.05, 12.84 ], "formula_id": "formula_1", "formula_text": "s l in = s l-1 out + z l out .(2)" }, { "formula_coordinates": [ 4, 374.03, 453.86, 171.08, 26.35 ], "formula_id": "formula_2", "formula_text": "s f inal = 1 T × N t,n s out .(3)" }, { "formula_coordinates": [ 4, 342.94, 646.64, 202.18, 26.35 ], "formula_id": "formula_3", "formula_text": "s f inal = P roj(U p( 1 N n s out ) + Z out ).(4)" } ]
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b15", "b18", "b55", "b56", "b65", "b7", "b19", "b23", "b28", "b17", "b51", "b9", "b75", "b21", "b68", "b16", "b20", "b34", "b64", "b67", "b8", "b49", "b70", "b3", "b10", "b5", "b47", "b12", "b30", "b53", "b66", "b44", "b43", "b0", "b52", "b48", "b54", "b35", "b1", "b40", "b26", "b32", "b22" ], "table_ref": [], "text": "Graphs serve as a ubiquitous representation for a diverse range of real-world data, encompassing domains such as social networks (Guo and Wang 2020;He et al. 2017;Wang et al. 2019), chemical molecules (Wang et al. 2022;Yang et al. 2023;Du et al. 2022), transportation systems (Hong et al. 2020;Jin et al. 2023), and recommender systems (Wu et al. 2022b), among many others. As the tailor-made designs, Graph neural networks (GNNs) (Kipf and Welling 2017;Hamilton, Ying, and Leskovec 2017;Velickovic et al. 2018;Dwivedi et al. 2020) have become a prevalent solution for machine learning tasks on graph-structured data, and have exhibited outstanding performance across a broad spectrum of graph-related applications (Zhou et al. 2020;Ji et al. 2021;Yue et al. 2020;Gupta, Matta, and Pant 2021).\nHowever, real-world scenarios often involve large-scale graphs with millions of nodes and edges (Hu et al. 2020;Li et al. 2020), which presents significant computational overheads when training GNNs (Xu et al. 2019;You et al. 2020;Duan et al. 2022). Worse still, fine-tuning hyperparameters and identifying suitable training schemes for self-supervised models can be both expensive and resource-intensive, particularly for large-scale graphs with dense connections. To this end, when a GNN is making a prediction, one naturally raises a question: is it possible to effectively simplify or reduce the graph to not only accelerate graph algorithms, including GNNs, but also aid in storage, visualization, and retrieval for associated graph data analysis tasks (Jin et al. 2022b,a;Toader et al. 2019;Zhang et al. 2021)?\nTo address this inefficiency, existing approaches typically fall into two research lines -graph sampling and graph distillation. Within the first class, many endeavors (Chen, Ma, and Xiao 2018;Eden et al. 2018;Chen et al. 2021;Sui et al. 2022;Gao and Ji 2019;Lee, Lee, and Kang 2019) have investigated the use of custom-built sampling approaches to reduce the computational footprint of GNNs (including some pruning methods). These methods aim to identify the discriminative edges or nodes to enhance training or inference efficiency. Nevertheless, sampling or pruning graph nodes or edges may cause massive information loss, resulting in performance collapse (Wu et al. 2022a;Wang et al. 2023). To this end, many studies focus on the graph distillation research line. In contrast to simplifying the graph structure or nodes, the second research line targets condensing the large original graph into a small, synthetic, and highly informative graph. The objective is to train GNNs on the condensed graph, such that their performance is comparable to those trained on the original graph (Ying et al. 2018;Roy et al. 2021;Ranjan, Sanyal, and Talukdar 2020;Jin et al. 2022b,a). It is worth emphasizing that there are fewer prior studies on pruning or compressing GNNs (Bahri, Bahl, and Zafeiriou 2021;Wang et al. 2021;Tailor, Fernandez-Marques, and Lane 2020), which can bring the salient benefits for powerefficient graph representation learning. 
However, they cannot be extracted and modeled in data-level optimization, which goes out of the scope of our work. Generally, graph distillation draws inspiration from data distillation techniques (Wang et al. 2018;Nguyen et al. 2021;Liu et al. 2022;Bohdal, Yang, and Hospedales 2020;Nguyen, Chen, and Lee 2021) and aims to ensure consistency between raw and synthetic datasets by constraining the soft labels across both sets. Recently, some trajectory matching algorithms show great prominence in image (Kim et al. 2022;Lee et al. 2022;Jiang et al. 2022) and graph realms (Jin et al. 2022b,a). Concretely, these frameworks adopt parameter or gradient matching scheme w.r.t the condensed set and raw data during the training process. Though promising, enforcing gradient matching results in an inelastic compression process, as shown in Fig. 1 (Left). When we summarize a paper into an abstract, we can easily see that replacing some words does not change the meaning of abstract. The appearance of these synonyms in the synthetic data set will not change the meaning, but the trajectory matching expects that the synthetic set is completely consistent with the original set under each step of the training process.\nResearch gap. Given a more intuitive instance (Fig. 1 (Middle)), graph trajectory matching algorithms enforce gradient consistency and provide a rigid parameter space that limits the flexibility during the training. In fact, these methods may not explore the impact of certain parameters that are similar in the vicinity of the optimal matching point.\nThis paper targets at overcoming this tricky hurdle and explores a more robust graph condensation framework. We propose a graph robust condensation algorithm, (GroC), a principled adversarial training (bi-level optimization) framework that explores the neighborhood space of the parameters that have the greatest impact on the original matching process. To achieve this, we painstakingly design a Shock Absorber operator to attach perturbations of adversarial training in specific positioned location of the synthetic dataset. To highlight, the optimization process of our GroC is: (i) robust, as the training compression process is more stable and robust, which can better find a compressed subset (see the example in Fig. 1 Right); (ii) one-stop-shop, since it is completely free of human labor of trial-and-error on perturbations and locations choices. (iii) time-efficient, through free training algorithm, we parallelize the entire process of optimizing adversarial perturbations and synthesizing datasets, ensuring that virtually no additional time overhead is introduced.\nContributions. Our contributions can be summarized as " }, { "figure_ref": [], "heading": "Preliminaries & Related Work", "publication_ref": [ "b28", "b17", "b51", "b9", "b63", "b3", "b10", "b5", "b47", "b12", "b30", "b66", "b44", "b43", "b54", "b73", "b2", "b14", "b37", "b42", "b36", "b50", "b36" ], "table_ref": [], "text": "Graph Neural Networks (GNNs). Graph neural networks (GNNs) (Kipf and Welling 2017;Hamilton, Ying, and Leskovec 2017;Velickovic et al. 2018;Dwivedi et al. 2020;Wu et al. 2020) are capable of processing variable-sized, permutation-invariant graphs. They learn low-dimensional representations through an iterative process that involves transferring and aggregating the representations from topological neighbors. Although GNNs have shown promising results, they face significant inefficiencies when scaling up to large or dense graphs. 
Towards this end, several research streams have focused on addressing this issue, such as graph sampling and graph distillation.\nGraph Sampling & Distillation. Graph sampling reduces the computational burden of GNNs by selectively sampling sub-graphs or applying pruning methods (Chen, Ma, and Xiao 2018;Eden et al. 2018;Chen et al. 2021;Sui et al. 2022;Gao and Ji 2019;Lee, Lee, and Kang 2019). However, the aggressive sampling strategy may lead to a significant loss of information, potentially reducing the representation ability of the sampled subset. To address this, graph distillation research line (Ying et al. 2018;Roy et al. 2021;Ranjan, Sanyal, and Talukdar 2020) draws inspiration from dataset distillation (DD), which aims to distill (compress) the knowledge embedded in raw data into synthetic data, ensuring that models trained on this synthetic data maintain performance (Wang et al. 2018;Zhao and Bilen 2023;Cazenavette et al. 2022;Nguyen et al. 2021). Remarkably, (Jin et al. 2022b,a) take the first step to propose optimizing both nodes and edges in the graph by employing training gradient matching, which under the spotlight of our research.\nAdversarial training & Robustness. Adversarial training was introduced as a defense mechanism against adversarial attacks, where a model is trained not only on the clean data but also on adversarial samples generated during training (Goodfellow, Shlens, and Szegedy 2015;Moosavi-Dezfooli, Fawzi, and Frossard 2016). They demonstrated that adversarial training can make deep neural networks more robust.\nBuild upon these observations, many subsequent studies pay attention to design different adversarial examples (Papernot, McDaniel, and Goodfellow 2016;Madry et al. 2018;Chen et al. 2018;Tramèr et al. 2017). In our work, we introduce existing methods to the inelasticity of synthetic data. Following the Projected Gradient Descent (PGD) (Madry et al. 2018), performs iterative gradient descent with backtracking to generate adversarial perturbations." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "As shown in Fig. 2, in this section, we devote to explaining how our GroC framework deepens and enhances the robustness in graph condensation task. Going beyond this, we take a comprehensive look at our key operator Shock Absorber.\nFinally, we introduce the free training for the efficient implementation GroC. In the following parts, we will delineate our model components by introducing Fig. 2 from left to right.\nFor ease of understanding, we summarize all notations and depict our algorithm in Appendix, which can be checked in supplemental materials." }, { "figure_ref": [ "fig_1" ], "heading": "Graph Condensation via Gradient Matching", "publication_ref": [ "b71", "b74" ], "table_ref": [], "text": "In this work, we first review the process of graph condensation and start from the original graph T = (A, X, Y), where A ∈ R N ×N is the adjacency matrix, N is the number of nodes, X ∈ R N ×d is the d-dimensional node feature attributes. Similar to traditional graph condensation task (Jin et al. 2022b,a), we note the label of nodes as Y = {0, 1, . . . , C -1} N denotes the node labels over C classes. Our work target to train a synthetic graph S\n= (A ′ , X ′ , Y ′ ) with adjacency matrix A ′ ∈ R N ′ ×N ′\nand feature attributes X ′ ∈ R N ′ ×D . 
We designate GroC, to obtain a synthetic graph with N ′ ≪ N , which satisfies that a general GNN trained on S can reach commensurate accuracy on large graph T .\nGradient Matching as the Objective. As our goal is to learn highly informative synthetic graphs, one prominent approach is to enable GNNs trained on synthetic graphs to mimic the training trajectory on the original large data. To achieve this goal, dataset condensation (Zhao and Bilen 2021;Zhao, Mopuri, and Bilen 2020;Jin et al. 2022b,a) introduces a gradient matching scheme. More specifically, it tries to minimize the discrepancy between the gradients of the model with respect to the parameters, as computed on a large-real data T and a small-synthetic data S, at each training step. Therefore, the parameters of the model trained on synthetic data will closely resemble those trained on real data at every training steps. We first formalize the problem as:\nmin S L (GNN θ S (A, X) , Y) s.t. θS = arg min θ L(GNN θ (A ′ , X ′ ), Y ′ ) (1)\nwhere GNN θ stands for the GNN initizalization with θ, L represents the loss function. θ S denotes that model parameters trained on S. The labels of the synthetic graph are pre-defined. First, we generate a specific number of labels, ensuring an equal number of labels per class. Then, we randomly select the corresponding nodes from the original dataset to serve as the initial features for each class in the synthetic dataset. Following the (Jin et al. 2022b), we employ multiple initializations to mitigate the risk of over-fitting. For ease of understanding, we describe the gradient matching procedure based on a single initialization in the following part, unless otherwise specified.\nmin S T t=0 D ∇ θ ℓ S t (f θ t (St), Y ′ ), ∇ θ ℓ T t (f θ t (T ), Y) s.t. θt+1 = opt(θt, St)(2)\nIn Eq. 2, D(•, •) is a distance function, f θt denotes the GNN model parameterized with θ at time point t. S t is the synthetic data of the t-th iteration of optimization. T represents the total number of steps in the training process, and opt(•, •) is the optimization operator used for updating the parameters θ. This equation represents a bi-level problem, where we learn the synthetic graphs S in the outer optimization loop, and update the model parameters θ t in the inner optimization loop. ℓ S t and ℓ T t are the negative log-likelihood loss of the synthetic and original datasets, respectively. We conduct gradient matching process at different time points and our parameter update forms of two datasets can be written as:\nθ S t+1 = opt θ ℓ S t GNN θ S t A ′ , X ′ , Y ′ θ T t+1 = opt θ ℓ T t GNN θ T t (A, X) , Y(3)\nHere we use backpropagation to update the model parameters θ S t+1 and θ T t+1 , and we proceed to consider to match the gradient distance of two graph set. Similar to (Jin et al. 2022b), we define distance D as the sum of the distances dis at each layer. For a specific layer, given two GNN model gradients G S ∈ R d1×d2 and G T ∈ R d1×d2 , the distance dis(•, •) used for condensation is defined as follows.\ndis(G S , G T ) = d2 i=1 (1 - G S i • G T i ∥G S i ∥ ∥G T i ∥ )(4)\nIn Eq. 4, G S and G T represent the i-th column vectors of the gradient matrices. By employing these formulations, we can efficiently achieve gradient matching strategy. However, traditional methods are notably unstable, and an excessive emphasis on gradient consistency can lead to synthesized graphs often lacking the desired generalization capability. 
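To make the matching objective concrete, a minimal PyTorch sketch of the per-layer distance in Eq. 4, and of the overall distance D used in Eq. 2 as its sum over layers, is given below. This is an illustrative reimplementation rather than the authors' code; the function names are ours, and the two gradient lists are assumed to come from torch.autograd.grad of ℓ_S and ℓ_T with respect to the same GNN parameters.

```python
# Illustrative sketch of the gradient-matching distance (Eq. 4), not the
# authors' released implementation.
import torch
import torch.nn.functional as F

def layer_distance(g_syn: torch.Tensor, g_real: torch.Tensor) -> torch.Tensor:
    # Eq. 4: for one parameter tensor, sum over columns of
    # (1 - cosine similarity) between the corresponding gradient columns.
    if g_syn.dim() == 1:                     # bias vectors: treat as a single column
        g_syn, g_real = g_syn.unsqueeze(1), g_real.unsqueeze(1)
    cos = F.cosine_similarity(g_syn, g_real, dim=0, eps=1e-8)
    return (1.0 - cos).sum()

def match_loss(grads_syn, grads_real) -> torch.Tensor:
    # D in Eq. 2: the per-layer distances are summed over all parameter tensors.
    return sum(layer_distance(gs, gr) for gs, gr in zip(grads_syn, grads_real))
```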
A promising approach is to introduce adversarial training, enabling the model to explore a wider latent space during its training process. To this end, we introduce adversarial training into the graph condensation research line for the first time, setting the context for our investigation.\nIn the following parts, for the sake of convenience, we will draw on the concept of left and right limits from mathematical notation. We denote the superscript t + for the right limit and the superscript t -for the left limit 1 . Fig. 2 showcases our GroC algorithms, in the initial stage, we do not update the GNN parameters. Instead, we optimize the trainable synthesized graph (outer loop). We refer to this particular time as the left limit of t, e.g., t -, in which we optimally update the synthetic dataset S t-1 → S t using the gradient computed by the GNN:\nX ′ ← X ′ -η1∇ X ′ D ′ if t % (ω1 + ω2) < ω1(5)\nIn Eq. 5, here D ′ is the updated distance of the two datasets, we leverage the gradient distances to propagate optimized features of the synthetic dataset in the backpropagation fashion. However, frequently matching gradients may lead the whole optimization process notoriously time-consuming. As a tradeoff, we match the gradient at regular intervals of ω 1 + ω 2 periodically. Concretely, in every ω 1 + ω 2 epoch, we match gradient ω 1 times for optimizing feature attributes and the next ω 2 epoch we only update adjacency matrix A ′ :\ng ϕ ← g ϕ -η2∇ ϕ D ′ ⇒ A ′ = g ϕ X ′ with A ′ ij = σ((MLP ϕ ([X ′ i ; X ′ j ]) + MLP ϕ ([X ′ j ; X ′ i ]))/2) (6)\nHere g ϕ denotes the MLP parameterized with ϕ. We generate adjacency matrix by controlling synthetic feature through g ϕ .\nThen we use hyper-parameter ρ to control the sparsity of the adjacency matrix." }, { "figure_ref": [ "fig_2" ], "heading": "Roust Learning via Shock Absorber", "publication_ref": [ "b36", "b29", "b36" ], "table_ref": [], "text": "Min-Max (Adversarial) Optimization. Starting from the conclusion of gradient matching at time point t -, we introduce our shock absorber operator to enhance the gradient matching process and thereby expand the optimization space for exploration at time point t. We propose to regularly and automatically learn to add a perturbation δ (generated by adversarial training and refer to shock absorber) on attributes of synthetic graph S t . Further, we update our adversarial training framework via the following min-max optimization framework:\n1 Assuming that t + is the right limit for t, for any ς > 0 satisfies that t + -t > ς, and we can draw the similar conclusion in left limit t -: t -t -> ς.\nmin θ t+1 E θ 0 ∼P θ 0 max θ * t ,∥δ∥ p ≤ε D ∇ θ t+1 ℓ S t , ∇ θ t+1 ℓ T t ℓ S t := ℓ S t f θ * t (St + δγ) ℓ T t := ℓ T t f θ * t (T ), Y(7)\nwhere f θ * t represents the GNN model parameterized with the fixed optimal θ * at the t-th iteration, ∥•∥ p is some ℓ p -norm distance metric, ε is the perturbation budget, and D is the distance function which comes from Eq. 4. E θ0∼P θ 0 denotes that multiple times initialization (satisfies P θ0 distribution) and calculating expectation. Thoroughly achieving Eq. 7 need to find a temporary and intrusive variable, e.g., Shock Absorber, which can help to explore the gradients field of the synthetic datasets as much as possible with limited scope. Towards this end, we resort to previous research (Madry et al. 
2018), which has demonstrated that the saddle-point optimization problem of Eq 7 can be effectively solved using Stochastic Gradient Descent (SGD) for the outer minimization and Projected Gradient Descent (PGD) for the inner maximization. Similarly, the approximation of the inner maximization under an l ∞ constraints is as follows:\nδγ+1 = Π ||δ||∞≤ε δγ + α • D ∇ θ * t ℓ S t , ∇ θ * t ℓ T t (8)\nwhere the perturbation δ is updated iteratively for M round, and the function Π δ∞≤ε performs projection onto the ε-ball in the l ∞ -norm. Compared to the traditional adversarial attack or training algorithms (Kong et al. 2022;Madry et al. 2018), We removed the sign function, as we aim for updates to be within a more granular range, the diversified perturbations can ensure the process is more robust.\nWe iteratively update M times to generate the perturbations as depicted in Eq. 8, this process requires M end-to-end forward and backward passes. For ease of understanding, we illustrate the process through which the shock absorber operates in Fig. 3. In M rounds updating, we iteratively fuse the perturbations with synthetic graph.\nIn this fashion, the most severe perturbations δ M are applied to the input features, upon which the model weights are optimized. It is worth emphasizing that we have no parameter update process in this procedure, we preserve the parameter gradient throughout the entire M iterations and subsequently eliminate this perturbation after M rounds of Shock Absorber influence. In next time point (t + ), we use the following function to proceed to update the synthetic dataset:\nD S t + = 1 M M γ=1 D γ t ∇ θ * t ℓ S t , ∇ θ * t ℓ T t(9)\nHere D γ t denotes distance at γ round at t points. D S t + represents the distance after attaching M round shock absorber. At t + time point, we only use the average gradient values to update the synthetic datasets." }, { "figure_ref": [ "fig_2" ], "heading": "A time-efficient version of GroC", "publication_ref": [ "b46" ], "table_ref": [], "text": "To better generalize to large datasets and reduce the computational complexity, we provide a time-efficient version of GroC called TimGroC. Compared to GroC, TimGroC achieves a significantly efficient time benefit, enhancing model robustness with almost no additional time overhead. In the implementation of TimGroC, we removed the training loop that updates the adversarial perturbation during adversarial training optimization (i.e., as illustrated in Fig 3). This allows the M iterations at time t + to be integrated into the outer loop for optimizing the synthesized dataset. Specifically, the adversarial perturbation is set as a persistent variable and added to the synthesized data for gradient matching. This process involves both forward and backward passes, simultaneously obtaining gradients for the synthesized dataset and the shock absorber. This process can be understood as free adversarial training (Shafahi et al. 2019). Based on this, we reap the benefits brought by adversarial training, with virtually no additional time cost introduced." }, { "figure_ref": [], "heading": "Gradient Locating in Synthesized Data", "publication_ref": [], "table_ref": [], "text": "In this work, we focus on using the shock absorber for helping the graph data condensation process be more robust. However, employing excessively large perturbations may diminish the expressive power of the entire synthetic dataset. Therefore, we selectively apply perturbations solely to the most vulnerable portion of the synthetic dataset. 
We refer to this process as gradient localization, as it involves identifying the optimal location for applying perturbations. Concretely, we perform element-wise multiplication between a differentiable all-ones matrix m δ and the perturbations δ. The purpose of operation is to incorporate the all-ones matrix into the optimization computation. Note that the shape of δ is identical to that of the synthetic data S.\nR = ∇m δ D ∇ θ ℓ(f θ t (S + δ ⊙ m δ ⊙ mg), Y ′ ), ∇ θ ℓ(f θ t (T ), Y) (10\n)\nwhere R is the absolute value of the gradient of the m δ . The adversarial training is performed M times backward pass and forward pass, the noise δ 0 is uniform noise, m g is given an initial value of all one matrix. Through the above gradient information, we use topk algorithm to get position m g :\nm g (i,j) = 1, if R i,j are the top-k largest entries 0, otherwise(11)\nThe m g obtained by the gradient in the previous round acts on the noise δ γ+1 of the second round\nδ γ = δ ′ γ ⊙ m δ ⊙ m g,γ(12)\nwhere the m g,0 is the all-ones matrix, after computing the gradient information for the position matrix m δ at the first step, it becomes a sparse matrix with only 0 and 1 entries. Therefore, after incorporating our method into Eq. 3, it can be rewritten as:\nmin S T -1 t=0 D ∇ θ ℓ(f θ t (S + δ ⊙ m δ ⊙ mg), Y ′ ), ∇ θ ℓ(f θ t (T ), Y)(13)" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We present empirical results to demonstrate the effectiveness of our proposed methods GroC and TimGroC. The experiments aim to answer the following research questions:\n• RQ1. How is the evaluation quality of GroC and TimGroC compared to that of existing SOTAs? • RQ2. How effective is shock absorber? • RQ3. What is the time overhead of our model? • RQ4. Does the graph compressed by our model exhibit transferability across backbones?" }, { "figure_ref": [], "heading": "Experimental Setting", "publication_ref": [ "b28", "b20", "b69", "b17", "b20", "b57", "b57", "b11", "b45", "b28", "b58" ], "table_ref": [], "text": "Datasets & Baselines. We conduct experiments on three transductive datasets, i.e., Citeseer, Cora (Kipf and Welling 2017) and Ogbn-arxiv (Hu et al. 2020) and on the inductive dataset, i.e., Flickr (Zeng et al. 2020) and Reddit (Hamilton, Ying, and Leskovec 2017). For the datasets setting, we follow the setup in (Jin et al. 2022b). In addition, we also examine the transfer ability of our Shock Absorber on the graph classification task. We utilize the Ogbg-molhiv molecular dataset from Open Graph Benchmark (OGB) (Hu et al. 2020) and TUDatasets (DD and NCI1) (Morris et al. 2020a) for graphlevel property classification. On node classification datasets, We compare our method with one state-of-the-art condensation method and three coreset methods. (i) GCond (Jin et al. 2022b) models the condensed graph structure based on the condensed node features. (ii) Random coreset (Welling 2009) randomly selects nodes for graph sampling. (iii) The Herding coreset (Welling 2009) is often used in continual learning to select samples closest to the cluster center.(iv) The K-Center method (Farahani and Hekmatfar 2009;Sener and Savarese 2018) minimizes the maximum distance between a sample and its nearest center to select center samples. For the four baselines: Random, Herding, K-Center, and GCond, we use the implementations from (Jin et al. 2022b).\nEvaluation. We train our method and SOTAs with the same settings, including learning rate, optimizer, etc. 
Firstly, we create three condensed graphs by training methods with different seeds. Then, we train a GNN on each graph, repeating the process three times. To assess condensed graph information, we train GNN classifiers and evaluate on real graph test nodes or graphs. By comparing model performance on real graphs, we obtain condensed graph informativeness and effectiveness. Experiments are repeated 3 times, reporting average performance and variance.\nBackbones. To ensure fairness, we utilize the identical model as GCond (Jin et al. 2022b), specifically GCN (Kipf and Welling 2017), for evaluation. In the condensation process, we apply SGC (Wu et al. 2019), configuring it as a 2-layer model with 256 hidden units." }, { "figure_ref": [], "heading": "Main Results (RQ1)", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "In this part, we thoroughly investigate the performance of the GroC and TimGroC across various datasets. We conduct a comprehensive comparison of our frameworks with Random, Herding, K-Center, and GCond, for node classification tasks on the Cora, Citeseer, Ogbn-arxiv and Flickr datasets. In Tab 1, we present a comparison of sparsity performance and other related parameters between our model and the current state-of-the-art models. In Tab 2, we further extend the comparison by including several clustering/compression algorithms. The observations (obs) can be listed as follows.\n• Obs 1: GroC/TimGroC consistently outperform GCond under extremely large condensation rates, verifying its extraordinary performance. For instance, on the Citeseer dataset, our model can achieve a compression rate under 0.003, which is nearly 5.2% higher than the current SOTA GCond. These results demonstrate the significance of adversarial training to graph condensation (Tab. 1). Interestingly, from our visualization results (Fig 4) and Table 1, it can be observed that the graphs we compressed are highly dense, with edges acting as dense information carriers, which facilitates information storage.\n• Obs 2: GroC demonstrates superior performance and lower variance, which attests to the robustness of our algorithm. For instance, in Tab 2, both GroC and Tim-GroC achieved nearly the lowest variance and the highest performance. On the Citeseer dataset, they show an improvement of almost 5.2% accompanied by a decline in variance of nearly 2.0%. Meanwhile, on the Reddit, our framework exhibits a variance of only 0.05%, which is significantly lower than other models, further corroborating the robustness of our GroC and TimGroC frameworks. " }, { "figure_ref": [], "heading": "Scability of Shock Absorber (RQ2)", "publication_ref": [ "b20", "b39" ], "table_ref": [], "text": "Additionally, we evaluate Shock Absorber and DoSCond on the graph classification task using the Ogbg-molhiv (Hu et al. 2020), DD, and NCII datasets (Morris et al. 2020b). This comparative analysis allows us to assess the effectiveness and superiority of the shock absorber method in different scenarios. Following the methodology of DoSCond (Jin et al. 2022a), a condensation method for graph classification, we generated one condensed graph for each class. In the training phase of condensed graphs, we integrated our proposed shock absorber into the one-step gradient matching process.\nFor this purpose, we employed a 3-layer GCN for gradient matching. During the test phase, we used a GCN with the same architecture and trained the model on condensed graphs for 500 epochs with learning rate 0.001. 
" }, { "figure_ref": [], "heading": "Study of time consumption (RQ3)", "publication_ref": [], "table_ref": [], "text": "In this subsection, we examine the time cost of our algorithm through experiments to further assess whether our model introduces excessive time overhead while enhancing robustness (3070 GPU). Since our aim is to enhance robustness, our model incorporates an adversarial training process. We observed that the model achieves optimal performance when M is between 3 and 4. Consequently, we adopted the more efficient M = 3 as the setting for GroC to compare time efficiency. We discovered that TimGroC, compared to GroC, achieves an improvement ranging from 3.19 ∼ 4.11 times while maintaining optimal performance. This further substantiates that our algorithm enhances robustness without introducing additional computational power." }, { "figure_ref": [], "heading": "Study of transferability (RQ4)", "publication_ref": [], "table_ref": [], "text": "Lastly, we employed GCN as the training backbone and trained the synthesized smaller graph on this new backbone to evaluate its transferability. As shown in Tab 5, we choose Cora and Citeseer as benchmarks and follow the reduction ratio of (Jin et al. 2022b), we can easily observe that our synthesized data also achieves commendable performance on GraphSAGE, SGC, and MLP. This further attests to the ex- " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this study, we introduce GroC, a robust adversarial trainingbased graph condensation framework. GroC leverages principled adversarial training (min-max optimization) to explore the parameter space surrounding the influential parameters in the original matching process. Based on this, we further introduce the shock absorber operator, which enhances the gradient matching process and maximizes the exploration of synthetic dataset gradients within a limited scope. Our experimental results demonstrate that our approach surpasses other SOTA methods in terms of accuracy and efficiency across multiple node classification datasets and graph classification datasets. The evaluation highlights that our condensed graphs effectively retain important structural properties of the original graphs while significantly reducing dimensionality and computational complexity." }, { "figure_ref": [], "heading": "Notations", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Notation Definition", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C", "publication_ref": [], "table_ref": [], "text": "The number of classes of the raw graph K\nThe initialization times M\nThe number of adversarial perturbation optimizations T\nThe training epochs T\nThe distance function for computing gradient loss ρ\nThe hyperparameter to control the sparsity of adjacency matrix." }, { "figure_ref": [], "heading": "Algorithm of our GroC method", "publication_ref": [], "table_ref": [], "text": "Algorithm 1: The framework of our proposed GroC.\n1: Input: Training data Graph T = (A, X, Y), pre-defined labels Y ′ for condensed synthetic graph. Hyperparameters for control the progress of synthesized data ω 1 and ω 2 . Learning rates η 1 , η 2 , α. ρ is the hyperparameter to control the sparsity of the synthesized data adjacency matrix. 2: Randomly selecting node features of raw data to construct synthetic node attributes (one class chooses one node) X ′ . 
3: Initialize δ 0 as uniform noise 4: Initialize m δ as a differentiable matrix of all ones 5: Initialize m g as a matrix of all ones 6: Initialize γ = 0 7: for k = 0, ..., K -1 do \nend for 24:\nCompute Shock Absorber\nend if" }, { "figure_ref": [], "heading": "32:", "publication_ref": [], "table_ref": [], "text": "Update θ t+1 ← opt θ (θ t , S, τ θ )\n33:\nend for 34: end for 35: A ′ = g ϕ (X ′ ) 36:" } ]
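As a companion to Algorithm 1 and Eqs. 8-12 above, the sketch below illustrates one possible PyTorch realisation of the M-round Shock Absorber: the perturbation is restricted to the most sensitive entries via a top-k mask, updated by projected gradient ascent on the matching distance, and the distance averaged over the M rounds (Eq. 9) is what the outer loop backpropagates into the synthetic features. The GNN interface model(features, adjacency), the helper match_loss sketched after Eq. 4, and the values of alpha, eps and k are all illustrative assumptions, not the paper's released code or settings.

```python
# Illustrative sketch of the M-round Shock Absorber (Eqs. 8-12); names,
# signatures and hyper-parameter values are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def topk_mask(score: torch.Tensor, k: int) -> torch.Tensor:
    # Eq. 11: keep the k entries with the largest score, zero out the rest.
    mask = torch.zeros_like(score)
    mask.view(-1)[torch.topk(score.flatten(), k).indices] = 1.0
    return mask

def shock_absorber(model, feat_syn, adj_syn, y_syn, grads_real, match_loss,
                   M=3, alpha=1e-2, eps=5e-2, k=64):
    # grads_real: gradients of the loss on the original graph, precomputed and detached.
    delta = torch.empty_like(feat_syn).uniform_(-eps, eps)   # delta_0: uniform noise
    m_g = torch.ones_like(feat_syn)                          # m_g,0: all-ones location mask
    dist_sum = 0.0
    for _ in range(M):
        delta = delta.detach().requires_grad_(True)
        m_delta = torch.ones_like(feat_syn, requires_grad=True)  # differentiable all-ones (Eq. 10)
        out = model(feat_syn + delta * m_delta * m_g, adj_syn)
        loss_syn = F.cross_entropy(out, y_syn)                # classification loss on perturbed S
        grads_syn = torch.autograd.grad(loss_syn, list(model.parameters()),
                                        create_graph=True)
        dist = match_loss(grads_syn, grads_real)              # D(grad_S, grad_T)
        g_delta, g_mdelta = torch.autograd.grad(dist, (delta, m_delta), retain_graph=True)
        m_g = topk_mask(g_mdelta.abs(), k)                    # Eq. 11: relocate for next round
        delta = (delta + alpha * g_delta).clamp(-eps, eps)    # Eq. 8: ascend, project to eps-ball
        dist_sum = dist_sum + dist
    return dist_sum / M                                       # Eq. 9: averaged matching distance
```

In the time-efficient TimGroC variant described above, the same idea would instead keep a single persistent perturbation that shares the backward pass of the outer synthetic-graph update, rather than running the explicit M-round inner loop.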
In this paper, we study the graph condensation problem by compressing the large, complex graph into a concise, synthetic representation that preserves the most essential and discriminative information of structure and features. We seminally propose the concept of Shock Absorber (a type of perturbation) that enhances the robustness and stability of the original graphs against changes in an adversarial training fashion. Concretely, (I) we forcibly match the gradients between preselected graph neural networks (GNNs) trained on a synthetic, simplified graph and the original training graph at regularly spaced intervals. (II) Before each update synthetic graph point, a Shock Absorber serves as a gradient attacker to maximize the distance between the synthetic dataset and the original graph by selectively perturbing the parts that are underrepresented or insufficiently informative. We iteratively repeat the above two processes (I and II) in an adversarial training fashion to maintain the highly-informative context without losing correlation with the original dataset. More importantly, our shock absorber and the synthesized graph parallelly share the backward process in a free training manner. Compared to the original adversarial training, it introduces almost no additional time overhead. We validate our framework across 8 datasets (3 graph and 5 node classification datasets) and achieve prominent results: for example, on Cora, Citeseer and Ogbn-Arxiv, we can gain nearly 1.13% ∼ 5.03% improvements compare with SOTA models. Moreover, our algorithm adds only about 0.2% to 2.2% additional time overhead over Flicker, Citeseer and Ogbn-Arxiv. Compared to the general adversarial training, our approach improves time efficiency by nearly 4-fold. The code is available in the supplementary material.
Attend Who is Weak: Enhancing Graph Condensation via Cross-Free Adversarial Training
[ { "figure_caption": "Figure 1 :1Figure 1: Left: The motivation of our proposal. Middle: Traditional gradient matching can find a synthetic dataset with the closest parameters to the original dataset, near the red point, and the synthesized data (yellow point) to be close to the original dataset. However, the original gradient matching algorithm cannot effectively explore this neighborhood, resulting in a lack of resilience and robustness in the training process. Right: The training example of our proposal on Citeseer dataset (compared with SOTA model GCond).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of our GroC framework.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: An illustration of working process of Shock Absorber", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Visualization of condensed graphs. The learned condensed graphs are weighted graphs, The color depth of the edges varies with the weight.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "•Obs 3: GroC exhibits stronger training robustness. GroC exhibits stronger training robustness. In Fig 5, it is readily apparent that, during the training phase, GroC's curve is predominantly above that of GCond, demonstrating significant potential. Particularly on the Citeseer and Ogbn-Arixv datasets, our model excels and surpasses the current best models, still possessing the capability for further improvement towards the end of the training.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Performance during training. The test performances (accuracy) on Citeseer, Cora, Ogbn-arxiv and Flickr, Reddit.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "follows:(1) We propose a robust adversarial training-based graph condensation framework called GroC for more effectively learning the robust representation space of the synthetic data, by attaching perturbations on the synthetic graph during the gradient matching process. (2) Shock Absorber operator can not only help our model achieve prominent performances, but also serve as a general operator for other compression frameworks. (3) Building on our insights, we train our framework on graph/node classification tasks. Stunningly, our model can yield SOTA results on various graph benchmarks, e.g., for example, on Cora, Citeseer and Ogbn-Arxiv, we can gain nearly 1.13% ∼ 5.03% improvements compare with SOTA models under a smaller variance. Moreover, our algorithm adds only about 0.2% to 2.2% additional time overhead over Flicker, Citeseer and Ogbn-Arxiv. These results empirically demonstrate the effectiveness and robustness of our proposal.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison between condensed graphs and original graphs. 
The condensed graphs have fewer nodes and are more dense.", "figure_data": "Citeseer, r=0.3%Cora, r=0.4%ogbn-arxiv, r=0.05%Flickr, r=0.1%GroC/TimGroCGCondOriginal GroC/TimGroCGCondOriginal GroC/TimGroCGCondOriginal GroC/TimGroCGCondOriginalAccuracy (%)67.82/69.1664.1371.1280.91/82.0280.6180.9158.46/57.6256.4970.7646.95/47.0146.9347.16Nodes6633277727089090169343444444625Edges1515473221215429388039551166243946946218140Sparsity83.33 %83.33 %0.09 %85.71 %85.71 %0.15 %95.80 %97.65 %0.01 %97.73 %97.73 %0.02 %Storage0.085 MB0.087 MB 47.1 MB0.041 MB0.041 MB 14.9 MB0.078 MB0.078 MB 100.4 MB0.094 MB0.094 MB 86.8 MB(a) Citeseer, r=0.3%(b) Cora, r=0.4%(c) Ogbn-arxiv, r=0.05%(d) Flickr, r=0.1%(d) Reddit, r=0.1%", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The results of the classification accuracy are summarized in Tab. 3. Scability in Graph Classification. As shown in Tab. 3, in the graph classification scenario, our GroC shows a decent generalization. Its effectiveness can be attributed to the adversarial perturbation which improves the robustness during the graph condensation process. With the gradient localization in", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Shock Absorber without gradient location (GL) and Shock Absorber achieves promising performance in comparison to baselines even with extremely large condensation rates. We choose GCN as backbones and report the mean and standard deviation of the results of transductive performance on Citeseer, Cora, Ogbn-arxiv. Performance is reported as test accuracy (%).", "figure_data": "Dataset RatioBaselinesAblationOursRandom Herding K-Center GCondw/o GLGroCTimGroC Full graphCiteseer 0.3% 33.87±0.82 31.31±1.20 34.03±2.52 64.13±1.83 63.98±4.31 67.82±1.31 69.16±2.00 71.12±0.06Cora0.4% 37.04±7.41 43.47±0.55 46.33±3.24 80.61±0.31 80.43±0.45 80.91±0.39 82.02±0.42 80.91±0.10Ogbn-arxiv 0.05% 46.83±2.60 49.74±2.30 47.28±1.15 56.49±1.69 57.39±0.65 58.46±0.85 57.62±0.64 70.76±0.04Flickr0.1% 41.84±1.87 43.90±0.56 43.30±0.90 46.93±0.10 46.81±0.10 46.95±0.03 47.01±0.10 47.16±0.17Reddit0.1% 59.14±2.26 65.75±1.28 53.05±2.73 89.34±0.54 89.56±0.74 89.56±0.45 90.24±0.05 93.96±0.03", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The shock absorber, and without gradient location (GL) compared to DosCond. We report the accuracy for the first two datasets and ROC-AUC for the last dataset. We record the original graph performance of three graph as 78.81, 70.88 and 74.17.", "figure_data": "DatasetRatio DoSCond w/o GL Shock AbsorberDD0.2% 72.31±3.81 73.73±1.0674.29±1.06NCI10.1% 57.58±0.13 57.93±0.1458.20±0.39Ogbg-molhiv 0.01% 73.81±0.11 74.08±0.3474.17±0.10synthesized data, condensed graphs contain more effectiveinformation which is beneficial for model training. Moreover,our experiments on graph-level property classification havedemonstrated superior interpretability and generalization abil-ity for graph classification, surpassing leading baselines.", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparing the time consumption of (Tim)GroC and GCond. 
All results are in seconds should be multiplied by 100.", "figure_data": "Dataset Ratio GCond TimGroC w/o GL GroC TimGroCCora0.4% 29.9630.04100.44 30.09Citeseer 0.3% 29.7730.11102.37 30.40Ogbn-arxiv 0.05% 182.34183.20582.37 182.47Flicker 0.1% 8.508.5134.268.54Reddit 0.1% 51.4452.01214.82 52.27cellent transferability of our compression algorithm, offeringa reliable solution for future data compression.", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Transferability of our framework, here we choose TimGroC as compressed algorithm. Cora 2.6% 80.16±0.98 77.36±1.13 74.16±6.07 79.24±1.74 78.15±0.73 75.07±5.29 Citeseer 1.8% 71.10±0.61 66.87±1.32 59.32±4.15 72.21±0.35 71.51±0.68 65.31±3.59", "figure_data": "Method RatioBaselineTimGroCGraphSAGESGCMLPGraphSAGESGCMLP", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" } ]
Xinglin Li; Kun Wang; Hanhui Deng; Yuxuan Liang; Di Wu
[ { "authors": "M Bahri; G Bahl; S Zafeiriou", "journal": "", "ref_id": "b0", "title": "Binary graph neural networks", "year": "2021" }, { "authors": "O Bohdal; Y Yang; T Hospedales", "journal": "", "ref_id": "b1", "title": "Flexible dataset distillation: Learn labels instead of images", "year": "2020" }, { "authors": "G Cazenavette; T Wang; A Torralba; A A Efros; J Zhu", "journal": "", "ref_id": "b2", "title": "Dataset Distillation by Matching Training Trajectories", "year": "2022" }, { "authors": "J Chen; T Ma; C Xiao", "journal": "", "ref_id": "b3", "title": "Fastgcn: fast learning with graph convolutional networks via importance sampling", "year": "2018" }, { "authors": "J Chen; Y Wu; X Xu; Y Chen; H Zheng; Q Xuan", "journal": "", "ref_id": "b4", "title": "Fast gradient attack on network embedding", "year": "2018" }, { "authors": "T Chen; Y Sui; X Chen; A Zhang; Z Wang", "journal": "", "ref_id": "b5", "title": "A unified lottery ticket hypothesis for graph neural networks", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b6", "title": "", "year": "" }, { "authors": "W Du; X Yang; D Wu; F Ma; B Zhang; C Bao; Y Huo; J Jiang; X Chen; Y Wang", "journal": "Briefings in Bioinformatics", "ref_id": "b7", "title": "Fusing 2D and 3D molecular graphs as unambiguous molecular descriptors for conformational and chiral stereoisomers", "year": "2022" }, { "authors": "K Duan; Z Liu; P Wang; W Zheng; K Zhou; T Chen; X Hu; Z Wang", "journal": "", "ref_id": "b8", "title": "A comprehensive study on largescale graph training: Benchmarking and rethinking", "year": "2022" }, { "authors": "V P Dwivedi; C K Joshi; T Laurent; Y Bengio; X Bresson", "journal": "", "ref_id": "b9", "title": "Benchmarking Graph Neural Networks", "year": "2020" }, { "authors": "T Eden; S Jain; A Pinar; D Ron; C Seshadhri", "journal": "", "ref_id": "b10", "title": "Provable and practical approximations for the degree distribution using sublinear graph samples", "year": "2018" }, { "authors": "R Z Farahani; M Hekmatfar", "journal": "Springer Science & Business Media", "ref_id": "b11", "title": "Facility location: concepts, models, algorithms and case studies", "year": "2009" }, { "authors": "H Gao; S Ji", "journal": "", "ref_id": "b12", "title": "Graph u-nets", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b13", "title": "", "year": "" }, { "authors": "I J Goodfellow; J Shlens; C Szegedy", "journal": "", "ref_id": "b14", "title": "Explaining and Harnessing Adversarial Examples", "year": "2015" }, { "authors": "Z Guo; H Wang", "journal": "IEEE Transactions on Industrial Informatics", "ref_id": "b15", "title": "A deep graph neural networkbased mechanism for social recommendations", "year": "2020" }, { "authors": "A Gupta; P Matta; B Pant", "journal": "Materials Today: Proceedings", "ref_id": "b16", "title": "Graph neural network: Current state of Art, challenges and applications", "year": "2021" }, { "authors": "W L Hamilton; Z Ying; J Leskovec", "journal": "", "ref_id": "b17", "title": "Inductive Representation Learning on Large Graphs", "year": "2017" }, { "authors": "X He; L Liao; H Zhang; L Nie; X Hu; T.-S Chua", "journal": "", "ref_id": "b18", "title": "Neural collaborative filtering", "year": "2017" }, { "authors": "H Hong; Y Lin; X Yang; Z Li; K Fu; Z Wang; X Qie; J Ye", "journal": "", "ref_id": "b19", "title": "HetETA: Heterogeneous information network embedding for estimating time of arrival", "year": "2020" }, { "authors": "W Hu; M Fey; M Zitnik; Y Dong; H Ren; B Liu; M Catasta; J 
Leskovec", "journal": "", "ref_id": "b20", "title": "Open Graph Benchmark: Datasets for Machine Learning on Graphs", "year": "2020" }, { "authors": "S Ji; S Pan; E Cambria; P Marttinen; S Y Philip", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b21", "title": "A survey on knowledge graphs: Representation, acquisition, and applications", "year": "2021" }, { "authors": "Z Jiang; J Gu; M Liu; D Z Pan", "journal": "", "ref_id": "b22", "title": "Delving into effective gradient matching for dataset condensation", "year": "2022" }, { "authors": "G Jin; Y Liang; Y Fang; J Huang; J Zhang; Y Zheng", "journal": "", "ref_id": "b23", "title": "Spatio-Temporal Graph Neural Networks for Predictive Learning in Urban Computing: A Survey", "year": "2023" }, { "authors": "W Jin; X Tang; H Jiang; Z Li; D Zhang; J Tang; B Yin", "journal": "", "ref_id": "b24", "title": "Condensing graphs via one-step gradient matching", "year": "2022" }, { "authors": "W Jin; L Zhao; S Zhang; Y Liu; J Tang; N Shah", "journal": "", "ref_id": "b25", "title": "Graph Condensation for Graph Neural Networks", "year": "2022" }, { "authors": "J.-H Kim; J Kim; S J Oh; S Yun; H Song; J Jeong; J.-W Ha; H O Song", "journal": "", "ref_id": "b26", "title": "Dataset condensation via efficient synthetic-data parameterization", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b27", "title": "", "year": "" }, { "authors": "T N Kipf; M Welling", "journal": "", "ref_id": "b28", "title": "Semi-Supervised Classification with Graph Convolutional Networks", "year": "2017" }, { "authors": "K Kong; G Li; M Ding; Z Wu; C Zhu; B Ghanem; G Taylor; T Goldstein", "journal": "", "ref_id": "b29", "title": "Robust optimization as data augmentation for large-scale graphs", "year": "2022" }, { "authors": "J Lee; I Lee; J Kang", "journal": "", "ref_id": "b30", "title": "Self-attention graph pooling", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b31", "title": "", "year": "" }, { "authors": "S Lee; S Chun; S Jung; S Yun; S Yoon", "journal": "", "ref_id": "b32", "title": "Dataset condensation with contrastive signals", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b33", "title": "", "year": "" }, { "authors": "G Li; C Xiong; A Thabet; B Ghanem", "journal": "", "ref_id": "b34", "title": "Deepergcn: All you need to train deeper gcns", "year": "2020" }, { "authors": "S Liu; K Wang; X Yang; J Ye; X Wang", "journal": "", "ref_id": "b35", "title": "Dataset distillation via factorization", "year": "2022" }, { "authors": "A Madry; A Makelov; L Schmidt; D Tsipras; A Vladu", "journal": "", "ref_id": "b36", "title": "Towards Deep Learning Models Resistant to Adversarial Attacks", "year": "2018" }, { "authors": "S.-M Moosavi-Dezfooli; A Fawzi; P Frossard", "journal": "", "ref_id": "b37", "title": "Deepfool: a simple and accurate method to fool deep neural networks", "year": "2016" }, { "authors": "C Morris; N M Kriege; F Bause; K Kersting; P Mutzel; M Neumann", "journal": "", "ref_id": "b38", "title": "TUDataset: A collection of benchmark datasets for learning with graphs", "year": "2020" }, { "authors": "C Morris; N M Kriege; F Bause; K Kersting; P Mutzel; M Neumann", "journal": "", "ref_id": "b39", "title": "Tudataset: A collection of benchmark datasets for learning with graphs", "year": "2020" }, { "authors": "T Nguyen; Z Chen; J Lee", "journal": "", "ref_id": "b40", "title": "Dataset Meta-Learning from Kernel Ridge-Regression", "year": "2021" }, { "authors": "T Nguyen; R Novak; L 
Xiao; J Lee", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b41", "title": "Dataset distillation with infinitely wide convolutional networks", "year": "2021" }, { "authors": "N Papernot; P Mcdaniel; I Goodfellow", "journal": "", "ref_id": "b42", "title": "Transferability in machine learning: from phenomena to blackbox attacks using adversarial samples", "year": "2016" }, { "authors": "E Ranjan; S Sanyal; P P Talukdar", "journal": "", "ref_id": "b43", "title": "ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations", "year": "2020" }, { "authors": "K K Roy; A Roy; A M Rahman; M A Amin; A A Ali", "journal": "IEEE", "ref_id": "b44", "title": "Structure-Aware Hierarchical Graph Pooling using Information Bottleneck", "year": "2021" }, { "authors": "O Sener; S Savarese", "journal": "", "ref_id": "b45", "title": "Active Learning for Convolutional Neural Networks: A Core-Set Approach", "year": "2018" }, { "authors": "A Shafahi; M Najibi; M A Ghiasi; Z Xu; J Dickerson; C Studer; L S Davis; G Taylor; T Goldstein", "journal": "", "ref_id": "b46", "title": "Adversarial training for free! Advances in Neural Information Processing Systems", "year": "2019" }, { "authors": "Y Sui; X Wang; T Chen; X He; T.-S Chua", "journal": "", "ref_id": "b47", "title": "Inductive Lottery Ticket Learning for Graph Neural Networks", "year": "2022" }, { "authors": "S A Tailor; J Fernandez-Marques; N D Lane", "journal": "", "ref_id": "b48", "title": "Degree-quant: Quantization-aware training for graph neural networks", "year": "2020" }, { "authors": "L Toader; A Uta; A Musaafir; A Iosup", "journal": "IEEE", "ref_id": "b49", "title": "Graphless: Toward serverless graph processing", "year": "2019" }, { "authors": "F Tramèr; A Kurakin; N Papernot; I Goodfellow; D Boneh; P Mcdaniel", "journal": "", "ref_id": "b50", "title": "Ensemble adversarial training: Attacks and defenses", "year": "2017" }, { "authors": "P Velickovic; G Cucurull; A Casanova; A Romero; P Liò; Y Bengio", "journal": "", "ref_id": "b51", "title": "Graph Attention Networks", "year": "2018" }, { "authors": "J Wang; Y Wang; Z Yang; L Yang; Y Guo", "journal": "", "ref_id": "b52", "title": "Bigcn: Binary graph convolutional network", "year": "2021" }, { "authors": "K Wang; Y Liang; P Wang; X Wang; P Gu; J Fang; Y Wang", "journal": "", "ref_id": "b53", "title": "Searching Lottery Tickets in Graph Neural Networks: A Dual Perspective", "year": "2023" }, { "authors": "T Wang; J.-Y Zhu; A Torralba; A A Efros", "journal": "", "ref_id": "b54", "title": "Dataset distillation", "year": "2018" }, { "authors": "X Wang; X He; M Wang; F Feng; T.-S Chua", "journal": "", "ref_id": "b55", "title": "Neural graph collaborative filtering", "year": "2019" }, { "authors": "Y Wang; J Wang; Z Cao; A Barati Farimani", "journal": "Nature Machine Intelligence", "ref_id": "b56", "title": "Molecular contrastive learning of representations via graph neural networks", "year": "2022" }, { "authors": "M Welling", "journal": "", "ref_id": "b57", "title": "Herding dynamical weights to learn", "year": "2009" }, { "authors": "F Wu; A Souza; T Zhang; C Fifty; T Yu; K Weinberger", "journal": "", "ref_id": "b58", "title": "Simplifying graph convolutional networks", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b59", "title": "", "year": "" }, { "authors": "J Wu; X Chen; K Xu; S Li", "journal": "", "ref_id": "b60", "title": "Structural entropy guided graph hierarchical pooling", "year": "2022" }, { "authors": " Pmlr", 
"journal": "", "ref_id": "b61", "title": "", "year": "" }, { "authors": "S Wu; F Sun; W Zhang; X Xie; B Cui", "journal": "ACM Computing Surveys", "ref_id": "b62", "title": "Graph neural networks in recommender systems: a survey", "year": "2022" }, { "authors": "Z Wu; S Pan; F Chen; G Long; C Zhang; S Y Philip", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b63", "title": "A comprehensive survey on graph neural networks", "year": "2020" }, { "authors": "K Xu; W Hu; J Leskovec; S Jegelka", "journal": "", "ref_id": "b64", "title": "How Powerful are Graph Neural Networks?", "year": "2019" }, { "authors": "N Yang; K Zeng; Q Wu; J Yan", "journal": "", "ref_id": "b65", "title": "MoleRec: Combinatorial Drug Recommendation with Substructure-Aware Molecular Representation Learning", "year": "2023" }, { "authors": "Z Ying; J You; C Morris; X Ren; W Hamilton; J Leskovec", "journal": "Advances in neural information processing systems", "ref_id": "b66", "title": "Hierarchical graph representation learning with differentiable pooling", "year": "2018" }, { "authors": "Y You; T Chen; Z Wang; Y Shen", "journal": "", "ref_id": "b67", "title": "L2-gcn: Layer-wise and learned efficient training of graph convolutional networks", "year": "2020" }, { "authors": "X Yue; Z Wang; J Huang; S Parthasarathy; S Moosavinasab; Y Huang; S M Lin; W Zhang; P Zhang; H Sun", "journal": "Bioinformatics", "ref_id": "b68", "title": "Graph embedding on biomedical networks: methods, applications and evaluations", "year": "2020" }, { "authors": "H Zeng; H Zhou; A Srivastava; R Kannan; V K Prasanna", "journal": "", "ref_id": "b69", "title": "GraphSAINT: Graph Sampling Based Inductive Learning Method", "year": "2020-04-26" }, { "authors": "S Zhang; Y Liu; Y Sun; N Shah", "journal": "", "ref_id": "b70", "title": "Graph-less neural networks: Teaching old mlps new tricks via distillation", "year": "2021" }, { "authors": "B Zhao; H Bilen", "journal": "", "ref_id": "b71", "title": "Dataset condensation with differentiable siamese augmentation", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b72", "title": "", "year": "" }, { "authors": "B Zhao; H Bilen", "journal": "", "ref_id": "b73", "title": "Dataset condensation with distribution matching", "year": "2023" }, { "authors": "B Zhao; K R Mopuri; H Bilen", "journal": "", "ref_id": "b74", "title": "Dataset condensation with gradient matching", "year": "2020" }, { "authors": "J Zhou; G Cui; S Hu; Z Zhang; C Yang; Z Liu; L Wang; C Li; M Sun", "journal": "AI open", "ref_id": "b75", "title": "Graph neural networks: A review of methods and applications", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 53.64, 503.39, 240.02, 23.75 ], "formula_id": "formula_0", "formula_text": "= (A ′ , X ′ , Y ′ ) with adjacency matrix A ′ ∈ R N ′ ×N ′" }, { "formula_coordinates": [ 3, 356.01, 65.81, 202.59, 31.12 ], "formula_id": "formula_1", "formula_text": "min S L (GNN θ S (A, X) , Y) s.t. θS = arg min θ L(GNN θ (A ′ , X ′ ), Y ′ ) (1)" }, { "formula_coordinates": [ 3, 343.04, 234.89, 215.56, 39.48 ], "formula_id": "formula_2", "formula_text": "min S T t=0 D ∇ θ ℓ S t (f θ t (St), Y ′ ), ∇ θ ℓ T t (f θ t (T ), Y) s.t. θt+1 = opt(θt, St)(2)" }, { "formula_coordinates": [ 3, 358.63, 412.18, 199.97, 34.02 ], "formula_id": "formula_3", "formula_text": "θ S t+1 = opt θ ℓ S t GNN θ S t A ′ , X ′ , Y ′ θ T t+1 = opt θ ℓ T t GNN θ T t (A, X) , Y(3)" }, { "formula_coordinates": [ 3, 364.91, 523.45, 193.69, 26.91 ], "formula_id": "formula_4", "formula_text": "dis(G S , G T ) = d2 i=1 (1 - G S i • G T i ∥G S i ∥ ∥G T i ∥ )(4)" }, { "formula_coordinates": [ 4, 87.5, 315.25, 205.6, 10.94 ], "formula_id": "formula_5", "formula_text": "X ′ ← X ′ -η1∇ X ′ D ′ if t % (ω1 + ω2) < ω1(5)" }, { "formula_coordinates": [ 4, 62.15, 448.68, 230.95, 27.83 ], "formula_id": "formula_6", "formula_text": "g ϕ ← g ϕ -η2∇ ϕ D ′ ⇒ A ′ = g ϕ X ′ with A ′ ij = σ((MLP ϕ ([X ′ i ; X ′ j ]) + MLP ϕ ([X ′ j ; X ′ i ]))/2) (6)" }, { "formula_coordinates": [ 4, 333.27, 253.9, 225.33, 38.31 ], "formula_id": "formula_7", "formula_text": "min θ t+1 E θ 0 ∼P θ 0 max θ * t ,∥δ∥ p ≤ε D ∇ θ t+1 ℓ S t , ∇ θ t+1 ℓ T t ℓ S t := ℓ S t f θ * t (St + δγ) ℓ T t := ℓ T t f θ * t (T ), Y(7)" }, { "formula_coordinates": [ 4, 342.03, 486.28, 216.57, 12.81 ], "formula_id": "formula_8", "formula_text": "δγ+1 = Π ||δ||∞≤ε δγ + α • D ∇ θ * t ℓ S t , ∇ θ * t ℓ T t (8)" }, { "formula_coordinates": [ 5, 110.4, 227.63, 182.56, 21.29 ], "formula_id": "formula_9", "formula_text": "D S t + = 1 M M γ=1 D γ t ∇ θ * t ℓ S t , ∇ θ * t ℓ T t(9)" }, { "formula_coordinates": [ 5, 57.16, 685.13, 232.91, 18.38 ], "formula_id": "formula_10", "formula_text": "R = ∇m δ D ∇ θ ℓ(f θ t (S + δ ⊙ m δ ⊙ mg), Y ′ ), ∇ θ ℓ(f θ t (T ), Y) (10" }, { "formula_coordinates": [ 5, 290.06, 697.46, 2.9, 6.05 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 5, 326.97, 122.73, 231.69, 22.11 ], "formula_id": "formula_12", "formula_text": "m g (i,j) = 1, if R i,j are the top-k largest entries 0, otherwise(11)" }, { "formula_coordinates": [ 5, 393.65, 180.58, 165.02, 12.69 ], "formula_id": "formula_13", "formula_text": "δ γ = δ ′ γ ⊙ m δ ⊙ m g,γ(12)" }, { "formula_coordinates": [ 5, 321.04, 270.83, 237.57, 35.69 ], "formula_id": "formula_14", "formula_text": "min S T -1 t=0 D ∇ θ ℓ(f θ t (S + δ ⊙ m δ ⊙ mg), Y ′ ), ∇ θ ℓ(f θ t (T ), Y)(13)" } ]
2024-03-25
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Figure 1. Given only the input textual prompt, our system can autonomously detect and rectify the layout inconsistencies across various position requirements (a-d), object quantities (e-g), and resolutions (h-i)." }, { "figure_ref": [], "heading": "Abstract", "publication_ref": [], "table_ref": [], "text": "Diffusion models have recently achieved remarkable progress in generating realistic images. However, challenges remain in accurately understanding and synthesizing the layout requirements in the textual prompts. To align the generated image with layout instructions, we present a training-free layout calibration system SimM that intervenes in the generative process on the fly during inference time. Specifically, following a \"check-locate-rectify\" pipeline, the system first analyses the prompt to generate the target layout and compares it with the intermediate outputs to automatically detect errors. Then, by moving the located activations and making intra-and inter-map adjustments," }, { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b10", "b19", "b30", "b44", "b21", "b25", "b15", "b31", "b31", "b47", "b42", "b46", "b34", "b46" ], "table_ref": [], "text": "Text-to-image generation [11,20,29,31] has emerged as a promising application of AI-generated content (AIGC), demonstrating the remarkable ability to generate synthetic images from conditional text descriptions. This technology has attracted considerable attention in recent years due to its potential impact on various domains such as image customization [34,45], 3D content creation [22,26] and virtual reality [4]. Since achieving high-quality and diverse image generation is challenging, recent advancements have witnessed the rise of diffusion models [16,32]. Diffusion models employ a sequential generation process that gradually refines the generated images by iteratively conditioning on noise variables. This iterative refinement mechanism allows for an improvement in the fidelity and quality. Despite the effectiveness of diffusion models, a significant challenge remains: most text-to-image generators, typified by Stable Diffusion [32], show limitations in accurately understanding and interpreting textual layout instructions [12]. This can be regarded as a kind of \"hallucination\" [13,48], which refers to the phenomenon that the generated image is inconsistent with the prompt content. On the one hand, various textual descriptions include the relative relation \"a dog to the left of a cat\" and the superlative relation \"the crown on the bottom\", presenting an inherent difficulty for automated systems to parse and understand layout information. Besides, inaccuracies in spatial relations may be due to the prior knowledge embedded in pre-trained models, as the large dataset may contains certain biases or assumptions about object placement or orientation. To exemplify this point, consider the following situation: since the \"crown\" in the training images are predominantly positioned over the head of another organism, it becomes difficult to specify their occurrence below (Fig. 1-e).\nThese factors not only compromise the quality and fidelity of the generated images but also hinder the overall utility and user experience of text-to-image generation systems. Some efforts [43,47] attempt to address the issue by training auxiliary modules or fine-tuning diffusion models on datasets with layout annotations. 
Apart from the difficulty of collecting sufficient high-quality data, these resource-intensive methods require retraining for each given checkpoint, making them struggle to keep up with the rapid version iterations of base models.\nIn this paper, we delve into the exploration of layout calibration given a pre-trained text-to-image diffusion model. Consequently, we present a training-free real-time system SimM, which follows the proposed \"check-locate-rectify\" pipeline. The checking stage is first applied to mitigate the potential impact on the generation speed, where SimM generates approximate target layout for each object by parsing the prompt and applying heuristic rules. After comparing the target layout with the intermediate cross-attention maps, layout rectification can be initiated if there are layout inconsistencies, and SimM locates the misplaced objects during the localization stage. Finally, during the rectification stage, SimM transfers the located activations to the target regions, and further adjusts them with intra-/inter-map activation enhancement and suppression. The entire workflow only affects the generation process, avoiding any additional training or loss-based updates.\nWe conduct both quantitative and qualitative experiments to evaluate the effectiveness of the proposed SimM. Since the popular DrawBench dataset [35] only contains prompts with relative spatial relations, we present a new benchmark SimMBench that includes superlative descriptions composed of various orientations and objects, compensating for the diversity of textual prompts. Compared to the recent works [6, 25,47], which rely on precise target layout provided by the user, SimM achieves satisfactory correction results even when the target layout is not precise enough, leading to a significant improvement in the layout fidelity of the generated images." }, { "figure_ref": [ "fig_0" ], "heading": "Methodology", "publication_ref": [ "b31" ], "table_ref": [], "text": "In this paper, we aim to align the generated images with the layout requirements in the prompts, and present a layout calibration system that requires no additional fine-tuning. In Sec. 2.1, we first briefly review the publicly avaliable, stateof-the-art text-to-image generator, Stable Diffusion [32]. In Sec. 2.2, we introduce how to determine whether a layout correction should be initiated. And in Sec. 2.3, we detail the localization of activated regions on the merged crossattention maps. Finally, in Sec. 2.4, we present how the system rectifies the cross-attention activations according to the localized patterns and the target locations. An overview of the pipeline is illustrated in Fig. 2." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b31", "b18", "b15" ], "table_ref": [], "text": "Stable Diffusion. Stable Diffusion (SD) [32] applies a hierarchical variational autoencoder (VAE) [19] to operate the diffusion process [16] in a low-dimensional latent space. Specifically, the VAE consisting of an encoder E and a decoder D is trained with a reconstruction objective. 
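For concreteness, the encoder/decoder round trip of E and D introduced here can be reproduced in a few lines. The sketch below is only an illustration and assumes the publicly released Stable Diffusion weights accessed through the diffusers library; the checkpoint name and tensor sizes are placeholder assumptions, not details taken from SimM.

# Minimal sketch of the VAE round trip (E, D); checkpoint id and sizes are assumptions.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
vae.eval()

x = torch.randn(1, 3, 512, 512)                  # placeholder image tensor in [-1, 1]
with torch.no_grad():
    z = vae.encode(x).latent_dist.sample()       # E(x): latent features z, here (1, 4, 64, 64)
    x_rec = vae.decode(z).sample                 # D(z): reconstructed image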
The encoder E encodes the given image x into latent features z, and the decoder D outputs the reconstructed image x from the latent, i.e., x = D(z) = D(E(x))." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b27", "b32" ], "table_ref": [], "text": "To be applied in a text-to-image scenario, a pre-trained CLIP [28] text encoder encodes the input textual prompt into N tokens y, and a U-Net [33] consisting of convolution, self-attention, and L cross-attention layers is adopted as the denoiser ϵ θ . During training, given a noised latent z t and text tokens y at timestep t, the denoiser ϵ θ is optimized to remove the noise ϵ added to the latent code z:
L = E_{z∼E(x), y, ϵ∼N(0,1), t} [ ∥ϵ − ϵ_θ(z_t, t, y)∥_2^2 ]. (1)
During inference, a latent z T is sampled from the standard normal distribution N (0, 1). At each denoising step t ∈ [T, • • • , 1], z t-1 is obtained by removing noise from z t conditioned on the text tokens y. After the final denoising step, the decoder D maps the latent z 0 to an image x.
Cross-Modal Attention. The SD model leverages cross-attention layers to incorporate textual cues for the control of the image generation process. Given the text tokens y and intermediate latent features z l , the cross-attention maps from the l-th layer A l ∈ R W l ×H l ×N can be derived as
A^l = Softmax( Q^l (K^l)^⊤ / √d ), (2)
where z l and y are projected to the query matrix Q and key matrix K, the dimension d is used to normalize the softmax values, and we omit the superscript t for notational clarity and generality. Existing studies [5,6] have proposed that for the object corresponding to the k-th token of the prompt, higher activations on the intermediate cross-attention map A l k ∈ R W l ×H l indicate the approximate position where the object will appear. Therefore, we align the spatial location of generated objects with textual layout requirements by adjusting the activations on the cross-attention maps." }, { "figure_ref": [], "heading": "Check", "publication_ref": [ "b46", "b34", "b16" ], "table_ref": [], "text": "A key constraint for the real-time system is to minimize the influence on the generation speed. Therefore, SimM first (1) detects the presence of object layout requirements within the text and (2) assesses any discrepancies between the generated image and the specified layout requirements. Only if both conditions are met does the system take corrective action; otherwise, it continues with normal generation to avoid additional computational overhead. The exact implementation of the two-step inspection is discussed below.
Layout requirements exist in textual prompts. Existing studies [6,47] have predominantly emphasized relative spatial relations that are more common in written language, such as \"a dog to the left of a cat\". However, we argue that superlative spatial relations, which refer to the case where an object shares the same relation to all other objects, have been neglected by previous research and datasets [35]. For example, the phrase \"a flower on the left\" signifies that the flower is positioned to the left of all other objects, making it ideal for the leftmost target location. 
In practice, it is difficult for users to directly describe their layout requirements using multiple relative expressions at once, so more direct superlative expressions actually account for a larger number.\nTo effectively and efficiently capture both forms of expression in a straightforward manner, our system identifies specific positional keywords with predefined vocabulary (described in Supplementary Material). For relative spatial relations, we define five spatial relations, including left, right, above, below and between, with each relation containing a predefined vocabulary set. And for superlative spatial relations, we include additional vocabulary such as \"upper-left\" and \"lower-right\". The system filters out those prompts that contain words from the vocabulary set to determine the presence of layout requirements. In practice, such a simple check implementation achieves considerable accuracy with negligible additional computational overhead.\nDiscrepancy exists between the generated image and layout requirements. To determine whether the generated image is consistent with the layout requirements, the tar-get positions of all objects are necessary. For target layout generation, our system provides an efficient solution by performing a dependency parsing on the prompt following with heuristic rules. The dependency parsing can be implemented using an industrial-strength library such as spaCy [17]. After assigning syntactic dependency labels to tokens, SimM can parse the binary \"flower,leftmost' from the superlative \"a flower on the left\", and the triple \"dog,left of,cat\" from the relative \"a dog to the left of a cat\". Following pre-defined rules, the system first assigns target boxes to objects associated with superlative position terms. Then, the remaining relative triples (and quaternions if \"between\" exists) can be organized as a semantic tree, with nodes as objects and edges as spatial relations. By traversing the tree, the remaining space in the image is successively allocated. A detailed example of assignment can be found in Supplementary Material. For the object of the k-th token,\nb k = ( x k , y k , w k , h k ) ∈ [0, 1] 4\ndenotes the assigned bounding box, where ( x k , y k ) is the relative coordinates of the centre, w k and h k are the relative width and height of the box. And the absolute boundaries b l k for the l-th layer can be computed with the concrete size of the corresponding attention map. Note that the predicted box may not necessarily fit the size of the object and is commonly larger. However, thanks to subsequent activation transfer, this does not affect the rectification performance.\nOnce the target boxes are obtained, the system prepares to assess whether each generated object is aligned with its target position. One natural solution, using an object detector on the generated image, requires a restart of the generation after the assessment for rectification and significantly increases the overall latency. Therefore, SimM places the alignment confirmation in the first denoising step (i.e., the T -th step). Specifically, after deriving the cross-attention maps for all layers, a layered attention merging averages them to obtain a merged attention map:\nĀT = 1 L L l=1 A T,l ,(3)\nwhere • means that the maps are first upsampled to a uniform resolution of W 1 × H 1 before averaging. Then, for the object of the k-th token, SimM sums over the activations within ĀT k that correspond to the bounding box b 1 k . 
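The layered merging of Eq. (3) and the box-restricted activation sum it feeds can be sketched as follows. This is a minimal illustration with assumed tensor layouts and a placeholder threshold value, not the official implementation; the actual test on that sum is stated next.

# Sketch of layered attention merging (Eq. 3) and the target-box activation check.
# Tensor layouts and the threshold value are illustrative assumptions.
import torch
import torch.nn.functional as F

def merge_layers(attn_maps, size):
    # attn_maps: list of L cross-attention maps, each of shape (W_l, H_l, N).
    # Upsample every map to the common resolution `size` = (W_1, H_1), then average.
    upsampled = [F.interpolate(a.permute(2, 0, 1).unsqueeze(0), size=size,
                               mode="bilinear", align_corners=False).squeeze(0)
                 for a in attn_maps]                   # each: (N, W_1, H_1)
    return torch.stack(upsampled).mean(dim=0)          # merged map, (N, W_1, H_1)

def object_misplaced(merged, k, box, threshold=0.2):
    # merged: (N, W_1, H_1); k: token index of the object; box: (x0, y0, x1, y1)
    # in absolute coordinates on the W_1 x H_1 grid; `threshold` is a placeholder.
    x0, y0, x1, y1 = box
    in_box = merged[k, y0:y1, x0:x1].sum()              # activation mass inside the target box
    return bool(in_box < threshold)                     # True -> predicted to be misplaced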
If the sum does not exceed a pre-defined threshold, the system predicts that the object will be generated in the wrong place." }, { "figure_ref": [], "heading": "Locate", "publication_ref": [], "table_ref": [], "text": "After confirming the initiation of the rectification, the system identifies the source activated region for each object during the early T loc denoising steps.
Temporal Attention Merging. For each time step t ∈ [T, T − T loc ], the system simply saves the merged attention map Ā^t without any modification. When the (T − T loc )-th denoising step is finished, the system performs another temporal merging on all stored maps, obtaining Ā ∈ R W 1 ×H 1 ×N that more stably indicates the source positions of generated objects:
Ā = (1 / T loc ) Σ_{t = T − T loc}^{T} Ā^t . (4)
Activated Region Localization. Given the temporal-merged attention map Ā, the system locates the current activated region for each object. This is implemented by sweeping Āk with a rectangular sliding window. In practice, we keep the size of the window consistent with the target box assigned by heuristic rules. And the activated region b l k in the l-th layer can be converted from the most salient window b 1 k found on Āk ." }, { "figure_ref": [], "heading": "Rectify", "publication_ref": [], "table_ref": [], "text": "After the (T − T loc )-th denoising step, the system starts to modify the generated cross-attention map for rectification. Note that in the following statements, A denotes the cross-attention maps generated before applying Softmax(•).
Besides, the maps from the first and last cross-attention layers are not modified, as we have observed that doing so improves the quality of object generation in practice.
Activation Transfer. Since the size of the localized source activated region b l k and the assigned target box b l k are kept the same, the activation values of the source region can be directly duplicated to the target region, while the original region is filled with minimum values. In this way, SimM easily realizes the movement of the object. Even if the target boxes are obtained by other means (e.g., user-provided) rather than heuristic rules, this simple transfer remains valid after reshaping the source activated region.
Intra-Map Activation Enhancement and Suppression. In practice, we have found that some objects fail to appear due to insufficient activations in the cross-attention maps. Also, one object may not be exactly in its target area even after the transfer. Therefore, for the object of the k-th token, the system continues to modify the attention map by enhancing the activations in b l k . Meanwhile, to avoid the object appearing in non-target areas, the signal outside b l k is suppressed. 
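Putting the activation transfer and the intra-map adjustment just described together, the per-map operation can be sketched as below. This is a minimal illustration with simplified region handling and illustrative variable names, not the official code; the update rule it applies inside and outside the target box is formalized next.

# Sketch of activation transfer plus intra-map enhancement/suppression for one token map.
# Region handling is simplified; alpha = 10 follows the setting reported in the experiments.
import torch

def transfer_and_adjust(A_k, src_box, tgt_box, alpha=10.0):
    # A_k: (W_l, H_l) pre-softmax cross-attention map of the k-th token.
    # src_box / tgt_box: (x0, y0, x1, y1) boxes of identical size on this map.
    sx0, sy0, sx1, sy1 = src_box
    tx0, ty0, tx1, ty1 = tgt_box
    A_k = A_k.clone()
    patch = A_k[sy0:sy1, sx0:sx1].clone()
    A_k[sy0:sy1, sx0:sx1] = A_k.min()                 # vacate the source region with minimum values
    A_k[ty0:ty1, tx0:tx1] = patch                     # duplicate the activations into the target box
    inside = torch.zeros_like(A_k, dtype=torch.bool)
    inside[ty0:ty1, tx0:tx1] = True
    return torch.where(inside, A_k * alpha, A_k / alpha)   # enhance inside, suppress outside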
Formally, we have\nA t,l k (i, j) ← A t,l k (i, j) • α if (i, j) in b l k A t,l k (i, j) / α if (i, j) not in b l k ,(5)\nwhere l ∈ [2, L -1], and the hyperparameter α ∈ R + denotes the strength of the adjustment.\nInter-Map Activation Enhancement and Suppression.\nThe intra-map activation adjustment further enhances the control over the position of individual objects. However, due to the lack of interference between attention maps, the overlap of activated areas on different maps can lead to conflict and confusion in the generation of multiple objects. To avoid the issue, given its corresponding attention map A t,l k of each object, our system generates an adjustment mask M t,l k for other maps:\nM t,l k = 1 -Softmax(A t,l k ),(6)\nwhere the mask adjusts the attention value of other maps:\nA t,l g ← M t,l k ⊙ A t,l g , for g ∈ [1, N ] and g ̸ = k.(7)\nIn this way, after applying Softmax(•), the activated regions on different maps can be staggered to reduce conflicts. " }, { "figure_ref": [], "heading": "Layout-based methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Layout-Guidance", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b34", "b31", "b37" ], "table_ref": [], "text": "Datasets. We utilize different datasets to evaluate the effectiveness for both relative and superlative layout requirements. For prompts involving relative spatial relations, we use a subset of 20 prompts from the DrawBench [35] dataset, which is a common choice of previous works [25]. However, there is a lack of an appropriate dataset that addresses prompts concerning superlative spatial relations. Therefore, we present a benchmark SimMBench consisting of 203 prompts, where each prompt contains 1 to 4 objects, and each object has superlative layout requirements. Details are provided in Supplementary Material.\nBaselines. We select Stable Diffusion [32], Layout-Guidance [6], Attention-Refocusing [25] and BoxDiff [40] as baselines in the main comparison. We adopt the official implement and default hyperparameters for all baselines. Evaluation Metrics. The generation accuracy [25] is adopted as the primary evaluation metric. Specifically, a generated image will only be considered correct if all objects are correctly generated and their spatial positions or relations, color, and other possible attributes align with the corresponding phrases in the prompt. Following previous studies [40], we also report the CLIP-Score [15], which measures the similarity between the input text features and the generated image features. While this metric has been widely used to explicitly evaluate the fidelity to the text prompt, we highlight its reliability is limited, since CLIP struggles to understand spatial relationships and take them into account when scoring image-text pairs [38]. Implementation Details. We adopt the DDIM scheduler [37] with 20 denoising steps (i.e., T = 20). And the number of localization steps T loc is set to 1 as default.\nThe ratio of classifier-free guidance is set to 5. Adjustment strength α is set to 10. Four images are randomly generated for each evaluation prompt." }, { "figure_ref": [ "fig_3" ], "heading": "Main Results", "publication_ref": [], "table_ref": [], "text": "Quantitative results. Tab. 
1 shows the quantitative comparison results between different baselines and our SimM.
On the DrawBench dataset, our SimM achieves the highest generation accuracy and CLIP-Score, while outperforming the baselines by a significant margin of 9.5% in terms of accuracy. And on the SimMBench dataset, SimM not only surpasses the baselines by 14.45% in terms of accuracy but also achieves comparable CLIP-Score. The results signify the effectiveness of the SimM system in understanding both relative and superlative relationships, leading to satisfactory rectification of layout inconsistencies.
Qualitative results. In Fig. 4, we present more multi-resolution images generated by SimM. Fig. 5 shows a visual comparison between the proposed SimM and the competing baselines. Without additional layout guidance, the images generated by the vanilla Stable Diffusion fail to convey the layout requirements specified by the textual prompt while also suffering from missing objects. The three baseline models can enhance the accuracy of the generation in terms of layout. However, they each still suffer from respective issues. Taking the second row as an example, BoxDiff exhibits limitations in effectively controlling the layout, where the white daisies that should only appear on the right side also appear on the left and middle as well. And the images generated by Layout-Guidance and Attention-Refocusing exhibit noticeable blockiness, tearing artifacts and object deformations, which significantly degrade the quality. In contrast, our system maintains excellent image quality while rectifying the layout. We attribute this to the activation localization and movement, which allows us to preserve the generative capabilities of the base model to the maximum extent, without relying on rigid constraints imposed by loss functions." }, { "figure_ref": [ "fig_6" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "In Fig. 6, we visualize the generated images after removing the intra- and inter-map activation adjustments from SimM.
After removing the intra-map adjustment, objects are missing (first two rows) or specified objects appear outside their target positions (the last row). This illustrates that the mechanism significantly contributes to controlling the placement of objects. Meanwhile, removing the inter-map adjustment increases the likelihood of interference from activations of other maps, which can disrupt the generation of objects in their target positions, ultimately resulting in erroneous or incomplete object generation." }, { "figure_ref": [ "fig_10", "fig_12" ], "heading": "Further Analysis", "publication_ref": [ "b2" ], "table_ref": [], "text": "Effect of the number of localization steps T loc . In Fig. 7, we present the visual results with layout rectification initiated at different denoising steps during the generation. It can be observed that starting the rectification from the first denoising step yields better results, ensuring that each object appears in its designated position. The later the rectification starts, the worse the correction effect, thus compromising the fidelity of the generated images. 
This observation is consistent with the conclusion from previous studies [3,14], where diffusion models establish the layout in early stages and refine the appearance details in later stages.
Effect of adjustment strength α. We scale α from 0.1 to 50 and illustrate some generated cases in Fig. 8. Setting α to 0.1 essentially reverses the enhancement and suppression, resulting in objects appearing in non-designated positions. And setting α to 1 essentially removes the intra-map attention adjustment, leading to less effective layout rectification. Further increasing α to 10 facilitates rectification and provides better control over the layout. However, excessively large values of α (e.g., setting it to 50) can degrade the quality of the generated images while imposing stricter constraints on the object positions." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b30", "b40", "b48", "b31", "b31", "b26", "b0", "b6", "b41", "b19", "b42", "b46", "b26", "b46", "b20", "b26", "b23" ], "table_ref": [], "text": "Text-to-Image Generation. Earlier works studied text-to-image generation in the context of generative adversarial networks (GANs) [31,39,41,49], while recent advances have witnessed the rise of diffusion models. Stable Diffusion [32] trims off pixel-level redundancy by applying an autoencoder to project images into latent space and generating latent-level feature maps with the diffusion process. And to align with the provided textual input, Stable Diffusion [32] further employs the cross-attention mechanism to inject the textual condition into the diffusion generation process.
Layout Control in Diffusion Models. Existing progress fails to fully understand the spatial relations of objects in the free-form textual descriptions and reflect them in the synthesized image, especially for complex scenes. Therefore, jointly conditioning on text and layout has been studied, where layout control signals can be bounding boxes [27,40], segmentation maps [1,7,42], and key points [46]. Several methods extend the Stable Diffusion model by incorporating layout tokens into attention layers [20,43,47] or training layout-aware adapters [27]. However, requiring additional training on massive layout-image pairs, these approaches lack flexibility in the base model and may degrade the quality of the generated images. Therefore, recent efforts [6, 25, 40] design losses conditioned on layout constraints to update the noised latent together with denoising. Layout-Guidance [6] computes the loss by applying the energy function on the cross-attention map, Attention-Refocusing [25] constrains both cross-attention and self-attention to \"refocus\" on the correct regions, and BoxDiff [40] designs inner-box, outer-box, and corner spatial constraints. However, they introduce extra computational cost for gradient update, which affects the speed of generation. 
In contrast, our system directly modifies the activations to conform to the target for rectification, minimizing the computation overhead. Layout Generation.\nPrevious layout-to-image studies [6,47] have largely neglected the discussion on layout generation and heavily relied on users to directly provide accurate layout boxes for objects. However, this necessitates assessing the legality of user input and increases the learning and interaction difficulty for users. Moreover, we have observed a substantial decline in the quality of generated images when the provided boxes are insufficiently accurate. Latest efforts [21,25,27] have turned to large language models like GPT-4 [24] by creating appropriate prompting templates to generate layouts, while each API request adds response time and incurs additional costs. In this paper, our system provides a light-weight solution based on dependency parsing following with heuristic rules." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a training-free layout calibration system SimM for text-to-image generators, which aligns the synthesized images with layout instructions in a post-remedy manner. Following a \"check-locate-rectify\" pipeline, SimM first decides whether to perform the layout rectification by checking the input prompt and the intermediate cross-attention maps. During the rectification, the system identifies and relocates the activations of mispositioned objects, where the target positions are generated by analysing the prompt with dependency parsing and heuristic rules. To comprehensively evaluate the effectiveness of SimM, we present a benchmark called SimMBench, which covers both simple and complex layouts described in terms of superlative relations. Through extensive qualitative and quantitative experiments, we demonstrate our superiority in improving generation fidelity and quality." }, { "figure_ref": [], "heading": "Check, Locate, Rectify: A Training-Free Layout Calibration System for", "publication_ref": [], "table_ref": [], "text": "Text-to-Image Generation" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "A. Relation Vocabulary for Checking\nOur SimM determines the existence of layout requirements by checking whether any words from our predefined relation vocabulary are present in the prompt. According to the semantic similarity, the vocabulary contains six categories:\n• left: \"left\", \"west\" • right: \"right\", \"east\" • above: \"above\", \"over\", \"on\", \"top\", \"north\" • below: \"below\", \"beneath\", \"underneath\", \"under\", \"bottom\", \"south\" • between: \"between\", \"among\", \"middle\" • additional superlative: \"upper-left\", \"upper-right\", \"lower-left\", \"lower-right\"\nNote that (1) The \"additional superlative\" category serves as a supplement for words that have not been covered. In the given context, words such as \"left\" and \"above\" can also represent the superlative relations.\n(2) This vocabulary can easily be extended according to the needs of the dataset." }, { "figure_ref": [], "heading": "B. Superlative Predefined Positions", "publication_ref": [], "table_ref": [], "text": "For each object associated with a superlative relation, the relative bounding box b = ( x, y, w, h) is assigned as follows:\n• " }, { "figure_ref": [], "heading": "C. 
An Example of Target Layout Generation", "publication_ref": [], "table_ref": [], "text": "To facilitate understanding of how SimM parses the prompt and generates the target bounding box for each object with a set of heuristic rules, we show an example in Fig. 9 to illustrate it more clearly. Specifically, the process can be roughly divided into four steps:\n1. Semantic parsing. SimM parses the superlative tuples and relative triplets from the prompt. And the relative triples can be organized as a semantic tree, with nodes as objects and edges as spatial relations.\n2. Assign the superlative boxes. Given each superlative tuple, SimM assigns a predefined target box to the object according to its superlative position term. Object set. The predefined object set consists of 28 different items as follows:\n• single-word: \"backpack\", \"flower\", \"crown\", \"towel\", \"scarf \", \"beach\", \"clouds\", \"tree\", \"table\", \"book\", \"handbag\", \"bus\", \"bicycle\", \"car\", \"motorcycle\", \"cat\", \"dog\", \"horse\" • phrase: \"chocolate cookie\", \"strawberry cake\", \"vanilla ice cream cone\" • with color: \"yellow sunflower\", \"gray mountain\", \"white daisy\", \"pink cupcake\", \"red tomato\", \"golden saxophone\", \"green broccoli\"" }, { "figure_ref": [], "heading": "E. Detailed Accuracies on SimMBench", "publication_ref": [], "table_ref": [], "text": "In Tab. 2, we report the accuracies when the number of objects in the prompt is different. It can be observed that our SimM outperforms the baselines in all cases. Furthermore, despite the simplicity of the case with a single object, the accuracies do not show a clear downward trend as the number of objects increases. The difficulty of accurately representing the layout is also influenced by the specific layout requirements of the objects and their context. " }, { "figure_ref": [], "heading": "F. Additional Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "F.1. Latency Comparison for Layout Generation", "publication_ref": [ "b26", "b23" ], "table_ref": [], "text": "Since our SimM system presents a new solution for generating the target layout, we provide a brief discussion of the observed increase in latency here. Existing layout-toimage works [25,27] commonly rely on GPT-4 [24], however, each invocation of the API requires a response time of ∼3 seconds. In contrast, thanks to the industrial-strength library, our proposed solution requires an average of only 0.006 seconds for each prompt and does not require a GPU. This significantly improves the user experience for realtime text-to-image generators." }, { "figure_ref": [ "fig_11" ], "heading": "F.2. Generalization Across Diverse Styles", "publication_ref": [], "table_ref": [], "text": "In practical scenarios, users often request the text-to-image generators to produce images in specific styles. In Fig. 11, we show that the stylistic demands for generated images do not hinder the rectification of the layout by SimM." }, { "figure_ref": [], "heading": "F.3. Qualitative Results on Other Benchmarks", "publication_ref": [], "table_ref": [], "text": "We additionally present the qualitative results obtained on two latest benchmarks, HRS [2] and TIFA [18], in Fig. 10. These two benchmarks, similar to DrawBench, excessively focus on relative spatial relations. Due to the cost of comprehensive manual evaluation, we take quantitative evaluation on these benchmarks as future work." }, { "figure_ref": [ "fig_17" ], "heading": "F.4. 
Comparison with Training-Based Method", "publication_ref": [ "b46" ], "table_ref": [], "text": "LayoutDiffusion [47] is a representative approach in training auxiliary modules to embed the layout information into intermediate features for controlling. However, it is constrained to fixed categories, thereby rendering it unsuitable for various datasets including Drawbench. To compare our SimM with LayoutDiffusion, we select prompts that only includes valid objects for LayoutDiffusion from our SimM-Bench. As observed in Fig. 12, the limitation of layout significantly reduces the generation quality of LayoutDiffusion, resulting in its performance being far inferior to SimM." }, { "figure_ref": [], "heading": "LayoutDiffusion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Ours", "publication_ref": [], "table_ref": [], "text": "Motorcycle on the bottom.\nBicycle on the upper-left, towel on the lower-right." }, { "figure_ref": [], "heading": "Layout", "publication_ref": [], "table_ref": [], "text": "Motorcycle on the right.\nBicycle on the lower-left, towel on the upper-right. " }, { "figure_ref": [ "fig_18" ], "heading": "F.5. Failure Case Analysis", "publication_ref": [], "table_ref": [], "text": "In Fig. 13, we present typical cases of what the human evaluators perceive as errors. The first case is the repeated generation of objects with some in the wrong position. The second case is that multiple objects interact with each other during generation, resulting in incomplete generation. The third case includes missing or unclear objects. These errors are mostly due to the fact that a single adjustment strength parameter α may not be optimal for all generation. This results in insufficient activation enhancement or suppression on the attention map, leading to inaccuracies in generating all objects accurately or preventing the repeated generation." }, { "figure_ref": [ "fig_19", "fig_20" ], "heading": "F.6. Additional Qualitative Comparison Results", "publication_ref": [], "table_ref": [], "text": "To show the effectiveness of SimM, we illustrate additional qualitative results in Figs. 14 and15." }, { "figure_ref": [], "heading": "Stable Layout-Attention-BoxDiff", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Ours refocusing guidance Diffusion", "publication_ref": [], "table_ref": [], "text": "Gray mountain on the upper-left, pink cupcake on the lower-right." }, { "figure_ref": [], "heading": "Input Prompt", "publication_ref": [], "table_ref": [], "text": "White daisy on the bottom. " }, { "figure_ref": [], "heading": "Input Prompt", "publication_ref": [], "table_ref": [], "text": "Input Layout Input Layout" } ]
Figure 1 panel prompts: (a) A cartoon fox with clouds on the left. (b) A cartoon fox with clouds on the right. (c) A cartoon fox with clouds on the top. (d) A cartoon fox with clouds on the bottom. (e) A lion with a crown and flowers, the crown on the bottom, flowers on the top. (f) An angel, a flower on the top, an apple on the bottom, a mountain on the top. (g) A cat on the bottom right, a lamp on the top, a cake on the bottom, balloons on the left. (h) A boy on the left looked up at the aurora on the top right. (i) Eiffel Tower with a storm on the bottom.
Check, Locate, Rectify: A Training-Free Layout Calibration System for Text-to-Image Generation
[ { "figure_caption": "Figure 2 .2Figure 2. The \"check-locate-rectify\" pipeline of SimM, intervening in the generative process on the fly during inference.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. A detailed illustration of our SimM system. R means repeating, C means concatenating.and the decoder D outputs the reconstructed image x from the latent, i.e., x = D(z) = D(E(x)). To applied in a text-to-image scenario, a pre-trained CLIP[28] text encoder encodes the input textual prompt into N tokens y, and a U-Net[33] consisting of convolution, self-attention, and L cross-attention layers is adopted as the denoiser ϵ θ . During training, given a noised latent z t and text tokens y at timestep t, the denoiser ϵ θ is optimized to remove the noise ϵ added to the latent code z:", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Examples of multi-resolution image generated by SimM.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Qualitative comparisons on DrawBench and SimMBench. Textual prompts require to generate multiple objects with relative and superlative spatial relations.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Ours w/o intra-map w/o inter-map Bicycle on the right, towel on the left. Clouds on the top, crown on the bottom. Dog on the lower-right, handbag on the upper-left, and tree on the middle.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Ablation study of intra-/inter-map activation adjustment. The removal of intra-map adjustment leads to the omission of objects or positional errors, while the removal of inter-map adjustment results in fragmented or erroneous object generation.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 5 shows a visual comparison between the proposed SimM and the competing baselines. Without additional layout guidance, the images generated by the vanilla Stable Diffusion fail to convey the layout requirements specified by the textual prompt while also suffering from missing objects. 
The three baseline models can enhance the accuracy of the generation in < l a t e x i t s h a 1 _ b a s e 6 4 = \" 4 Z c C + 4 Q 3 p P i o x c z B a I T L u L m D 6 q M = \" > A A A B 8 H i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e x K U C 9 C w I v H C H l J s o b Z y S Q Z M o 9 l Z l Y I S 7 7 C i w d F v P o 5 3 v w b J 8 k e N L G g o a j q p r s r i j k z 1 v e / v d z a + s b m V n 6 7 s L O 7 t 3 9 Q", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1 e 0 r k g g q L e H Y m E 7 g x z Z M s b a M c D o t d B N D Y 0 z G e E g 7 j k o s q A n T + c F T d O a U P h o o 7 U p a N F d / T 6 R Y G D M R k e s U 2 I 7 M s j c T / / M 6 i R 1 c h y m T c W K p J I t F g 4 Q j q 9 D s e 9 R n m h L L J 4 5 g o p m 7 F Z E R 1 p h Y l 1 H B h R A s v 7 x K m h f l 4 L J c u a + U q p U s j j y c w C m c Q w B X U I U 7 q E E D C A h 4 h l d 4 8 7 T 3 4 r 1 7 H 4 v W n J f N H M M f e J 8 / c 2 e Q J Q = = < / l a t e x i t > T loc = 1 < l a t e x i t s h a 1 _ b a s e 6 4 = \" v e x 5 L c + H u f K / N Y c p T 7 f z k h a 9 m j Q", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "U K s K W e C + o Y Z T j u J o j g O O W 2 H 4 9 u Z 3 3 6 i S j M p m m a S 0 C D G Q 8 E i R r C x 0 k P z M e O S T G 8 8 t 1 + u u F V 3 D r R K v J x U I E e j X / 7 q D S R J Y y o M 4 V j r r u c m J s i w M o x w O i 31 U k 0 T T M Z 4 S L u W C h x T H W T z i 6 f o z C o D F E l l S x g 0 V 3 9 P Z D j W e h K H t j P G Z q S X v Z n 4 n 9 d N T X Q d Z E w k q a G C L B Z F K U d G o t n 7 a M A U J Y Z P L M F E M X s r I i O s M D E 2 p J I N w V t + e Z W 0 L q r e Z b V 2 X 6 v U a 3 k c R T i B U z g H D 6 6 g D n f Q A B 8 I C H i G V3 h z t P P i v D s f i 9 a C k 8 8 c w x 8 4 n z / k c Z B f < / l a t e x i t > T loc = 10 < l a t e x i t s h a 1 _ b a s e 6 4 = \" t 9 g S 8 P O D 1 J 8 8 0 S 7 z E e / m b u O MC C 0 = \" > A A A B 8 X i c b V B N S w M x E J 2 t X 7 V + V T 1 6 C R b B U 9 m V V r 0 I B S 8 e K / Q L 2 7 V k 0 2 w b m k 2 W J C u U p f / C i w d F v P p v v P l v T N s 9 a O u D g c d 7 M 8 z M C 2 L O t H H d b y e 3 t r 6 x u Z X f L u z s 7 u 0 f F A + P W l o m i t A m k V y q T o A 1 5 U z Q p m G G 0 0 6 s K I 4 C T t v B + H b m t 5 + o 0 k y K h p n E 1 I / w U L C Q E W y s 9 N B 4 T L k k 0 x u v 2 i + W 3 L I 7 B 1 o l X k Z K k K H e L 3 7 1 B p I k E R W G c K x 1 1 3 N j 4 6 d Y G U Y 4 n R Z 6 i a Y x J m M 8 p F 1 L B Y 6 o 9 t P 5 x V N 0 Z p U B C q W y J Q y a q 7 8 n U h x p P Y k C 2 x l h M 9 L L 3 k z 8 z + s m J r z 2 U y b i x F B B F o v C h C M j 0 e x 9 N G C K E s M n l m C i m L 0 V k R F W m B g b U s G G 4 C 2 / v E p a F 2 X v s l y 5 r 5 R q l S y O P J z A K Z y D B 1 d Q g z u o Q x M I C H i G V3 h z t P P i v D s f i 9 a c k 8 0 c w x 8 4 n z / s B Z B k < / l a t e x i t > T loc = 15 < l a t e x i t s h a 1 _ b a s e 6 4 = \" r C T l s / r R A j c k i o d e X b 8 H e u a 0 WW c = \" > A A A B 8 X i c b V B N S w M x E J 2 t X 7 V + V T 1 6 C R b B U 9 k t p X o R C l 4 8 V u g X t m v J p t k 2 N Js s S V Y o S / + F F w + K e P X f e P P f m L Z 7 0 N Y H A 4 / 3 Z p i Z F 8 S c a e O 6 3 0 5 u Y 3 N r e y e / W 9 j b P z g 8 K h 6 f t L V M F K E t I r l U 3 Q B r y p m g L c M M p 9 1 Y U R w F n H a C y e 3 c 7 z x R p Z k U T T O N q R / h k W A h I 9 h Y 6 a H 5 m H J J Z j c V d 1 A s u W V 3 A b R O v I y U I E N j U P z q D y V J I i o M 4 V j r n u f G x k + x M o x w O i v 0 E 0 1 j T C Z 4 R H u W 
C h x R 7 a e L i 2 f o w i p D F E p l S x i 0 U H 9 P p D j S e h o F t j P C Z q x X v b n 4 n 9 d L T H j t p 0 z E i a G C L B e F C U d G o v n 7 a M g U J Y Z P L c F E M X s r I m O s M D E 2 p I I N w V t 9 e Z 2 0 K 2 W v V q 7 e V 0 v 1 a h Z H H s 7 g H C 7 B g y u o w x 0 0 o A U E B D z D K 7 w 5 2 n l x 3 p 2 P Z W v O y W Z O 4 Q + c z x / l 9 p B g < / l a t e x i t >", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Effect of the number of localization steps T loc . Initiating layout rectification at an earlier stage enhances the fidelity.", "figure_data": "", "figure_id": "fig_10", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "< l a t e x i t s h a 1 _ 1 <11b a s e 6 4 = \" s 3 J T u Y v D B L q U c z S T 4 x a 6 K G Y 1 p Y A = \" > A A A B 8 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 i k q B e h 4 M V j B f u B b S i T 7 a Z d u t m E 3 Y 1 Q S v + F F w + K e P X f e P P f u G 1 z 0 N Y H A 4 / 3 Z p i Z F 6 a C a + N 5 3 0 5 h b X 1 j c 6 u 4X d r Z 3 d s / K B 8 e N X W S K c o a N B G J a o e o m e C S N Q w 3 g r V T x T A O B W u F o 9 u Z 3 3 p i S v N E P p h x y o I Y B 5 J H n K K x 0 m M X R T r E G 8 / 1 e + W K 5 3 p z k F X i 5 6 Q C O e q 9 8 l e 3 n 9 A s Z t J Q g V p 3 f C 8 1 w Q S V 4 V S w a a m b a Z Y i H e G A d S y V G D M d T O Y X T 8 m Z V f o k S p Q t a c h c / T 0 x w V j r c R z a z h j N U C 9 7 M / E / r 5 O Z 6 D q Y c J l m h k m 6 W B R l g p i E z N 4 n f a 4 Y N W J s C V L F 7 a 2 E D l E h N T a k k g 3 B X 3 5 5 l T Q v X P / S r d 5 X K 7 V q H k c R T u A U z s G H K 6 j B H d S h A R Q k P M M r v D n a e X H e n Y 9 F a 8 H J Z 4 7 h D 5 z P H 1 4 Y k A c = < / l a t e x i t > ↵ = 0.1< l a t e x i t s h a 1 _ b a s e 6 4 = \" e h y w R n D q 3 W p 4 o D j d 5 U P H G z D 5 q i o = \" >A A A B 7 3 i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 m k q B e h 4 M V j B f s B b S i T 7 a Z d u t n E 3 Y 1 Q Q v + E F w + K e P X v e P P f u G 1 z 0 N Y H A 4 / 3 Z p i Z F y S C a + O 6 3 0 5 h b X 1 j c 6 u 4 X d r Z 3 d s / K B 8 e t X S c K s q a N B a x 6 g S o m e C S N Q 0 3 g n U S x T A K B G s H 4 9 u Z 3 3 5 i S v N Y P p h J w v w I h 5 K H n K K x U q e H I h n h j d c v V 9 y q O w d Z J V 5 O K p C j 0 S 9 / 9 Q Y x T S M m D R W o d d d z E + N n q A y n g k 1 L v V S z B O k Y h 6 x r q c S I a T + b 3 z s l Z 1 Y Z k D B W t q Q h c / X 3 R I a R 1 p M o s J 0 R m p Fe 9 m b i f 1 4 3 N e G 1 n 3 G Z p I Z J u l g U p o K Y m M y e J w O u G D V i Y g l S x e 2 t h I 5 Q I T U 2 o p I N w V t + e Z W 0 L q r e Z b V 2 X 6 v U a 3 k c R T i B U z g H D 6 6 g D n f Q g C Z Q E P A M r / D m P D o v z r v z s W g t O P n M M f y B 8 / k D f 9 + P l Q = = < / l a t e x i t >↵ = l a t e x i t s h a 1 _ b a s e 6 4 = \" r 2 U 8 d P s z Y R 4 6 T e 0 9 2 X S A 0 8 1 d z s o = \" > A A A B 8 H i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 m k q B e h 4 M V j B f s h b S i T 7 a Z d u p u E 3 Y 1 Q Q n + F F w + K e P X n e P P f u G 1 z 0 N Y H A 4 / 3 Z p i Z F y S C a + O 6 3 0 5 h b X 1 j c 6 u 4 X d r Z 3 d s / K B 8 e t X S c K s q a N B a x 6 g S o m e A R a x p u B O s k i q E M B G s H 4 9 u Z 3 3 5 i S v M 4 e j C T h P k S h x E P O U V j p c c e i m S E N 5 7 b L 1 f c q j s H W S V e T i q Q o 9 E v f / U G M U 0 l i w w V q H X X c x P j Z 6 g M p 4 J N S 7 1 U s w T p G I e s a 2 m E k m k / m x 8 8 J W d W G Z A w V r Y i Q + b q 7 4 k M p d Y T G d h O i W a k l 7 2 Z + 
J / X T U 1 4 7 W c 8 S l L D I r p Y F K a C m J j M v i c D r h g 1 Y m I J U s X t r Y S O U C E 1 N q O S D c F b f n m V t C 6 q 3 m W 1 d l + r 1 G t 5 H E U 4 g V M 4 B w + u o A 5 3 0 I A m U J D w D K / w 5 i j n x X l 3 P h a t B S e f O Y Y / c D 5 / A P B h j 8 8 = < / l a t e x i t > ↵ = 10 < l a t e x i t s h a 1 _ b a s e 6 4 = \" M O d u A S 8 Y 6 x s D w l g q F X E S c o Q 7 6 K w = \" > A A A B 8 H i c b V D L S g N B E O z 1 G e M r 6 t H L Y h A 8 h V 2 J j 4 s Q 8 O I x g n l I s o T e y W w y Z G Z 2 m Z k V w p K v 8 O J B E a 9 + j j f / x k m y B 0 0 s a C i q u u n u C h P O t P G 8 b 2 d l d W 1 9 Y 7 O w V d z e 2 d 3 b L x 0 c N n W c K k I b J O a x a o e o K W e S N g w z n", "figure_data": "", "figure_id": "fig_11", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Effect of adjustment strength α. A value of 10 yields better layout stabilization and generation quality.", "figure_data": "", "figure_id": "fig_12", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "left: (0.20, 0.50, 0.33, 1.00) • right: (0.80, 0.50, 0.33, 1.00) • above: (0.50, 0.20, 1.00, 0.33) • below: (0.50, 0.80, 1.00, 0.33) • middle: (0.50, 0.50, 0.50, 0.50) • upper-left: (0.25, 0.25, 0.50, 0.50) • upper-right: (0.75, 0.25, 0.50, 0.50) • lower-left: (0.25, 0.75, 0.50, 0.50) • lower-right: (0.75, 0.75, 0.50, 0.50)", "figure_data": "", "figure_id": "fig_13", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "3 . 1 . 3 .313Traverse the semantic tree for a global view. By traversing the tree, SimM organizes the global layout of the remaining objects. 4. Assign the relative boxes. SimM allocates the remaining space to the objects associated with superlative relations.D. Benchmark DetailsOverview. Our proposed SimMBench focuses on superlative relations. Specifically, to sample an evaluation prompt, we first determine the number of objects in the prompt. Each prompt contains a minimum of one object and a maximum of four objects. Then, we sample the superlative relation for each object that has not yet been determined, where clouds right crown lion above \"The clouds on the right. A crown is on top of a lion.\" Semantic parsing. Assign the superlative boxes. Traverse the semantic tree for a global view. 4. Assign the relative boxes.", "figure_data": "", "figure_id": "fig_14", "figure_label": "313", "figure_type": "figure" }, { "figure_caption": "Figure 9 .Figure 10 .910Figure 9. Example of target layout generation.", "figure_data": "", "figure_id": "fig_15", "figure_label": "910", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. Layout calibration results of images in different styles.", "figure_data": "", "figure_id": "fig_16", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 .12Figure12. Qualitative comparisons with LayoutDiffusion[47]. The generation quality of LayoutDiffusion is far worse than SimM.", "figure_data": "", "figure_id": "fig_17", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 .13Figure 13. Typical failure cases identified by human evaluators.", "figure_data": "", "figure_id": "fig_18", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 .14Figure 14. Additional qualitative comparisons.", "figure_data": "", "figure_id": "fig_19", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 .15Figure 15. 
Additional qualitative comparisons.", "figure_data": "", "figure_id": "fig_20", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "PromptSD Encoder Block_1128 x 128cross-attention mapSD Encoder Block_264 x 64cross-attention mapNEW cross-attention mapSD Encoder Block_3Time32 x 32SD Encoder Block_416 x 16SD Middle16 x 16SD Decoder Block_116 x 16SD Decoder Block_232 x 32SD Decoder Block_364 x 64SD Decoder Block_4128 x 128", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative comparisons with competing methods. The generation accuracy (%) and CLIP-Score on DrawBench[35] and our presented SimMBench are reported.", "figure_data": "DrawBench [35]SimMBenchMethodsAccuracy CLIP-Score Accuracy CLIP-ScoreStable Diffusion [32]12.500.32674.250.3012BoxDiff [40]30.000.323924.080.3032Layout-Guidance [6]36.500.335425.500.3020Attention-Refocusing [25] 43.500.333950.710.3017SimM (Ours)53.000.342365.160.3001", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Detailed quantitative results on SimMBench. The generation accuracy (%) is reported.", "figure_data": "Methods1 object 2 objects 3 objects 4 objectsStable Diffusion [32]15.565.210.000.00BoxDiff [40]41.1118.2319.6413.33Layout-Guidance [6]82.225.733.5720.00Attention-Refocusing [25] 65.5641.6757.1453.33SimM (Ours)82.2253.6476.7966.67", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" } ]
Biao Gong; Siteng Huang; Yutong Feng; Shiwei Zhang; Yuyuan Li; Yu Liu; Alibaba Group
[ { "authors": "Omri Avrahami; Thomas Hayes; Oran Gafni; Sonal Gupta; Yaniv Taigman; Devi Parikh; Dani Lischinski; Ohad Fried; Xi Yin", "journal": "", "ref_id": "b0", "title": "SpaText: Spatio-textual representation for controllable image generation", "year": "2023" }, { "authors": "Mohamed Eslam; Pengzhan Bakr; Xiaoqian Sun; Faizan Shen; Li Erran Farooq Khan; Mohamed Li; Elhoseiny", "journal": "", "ref_id": "b1", "title": "HRS-Bench: Holistic, reliable and scalable benchmark for text-to-image models", "year": "2023" }, { "authors": "Yogesh Balaji; Seungjun Nah; Xun Huang; Arash Vahdat; Jiaming Song; Karsten Kreis; Miika Aittala; Timo Aila; Samuli Laine; Bryan Catanzaro; Tero Karras; Ming-Yu Liu", "journal": "", "ref_id": "b2", "title": "eDiff-I: Text-to-image diffusion models with an ensemble of expert denoisers", "year": "2022" }, { "authors": "Chris Bussell; Ahmed Ehab; Daniel Hartle-Ryan; Timo Kapsalis", "journal": "", "ref_id": "b3", "title": "Generative AI for immersive experiences: Integrating text-to-image models in VR-mediated co-design workflows", "year": "2023" }, { "authors": "Hila Chefer; Yuval Alaluf; Yael Vinker; Lior Wolf; Daniel Cohen-Or", "journal": "", "ref_id": "b4", "title": "Attend-and-Excite: Attention-based semantic guidance for text-to-image diffusion models", "year": "2023" }, { "authors": "Minghao Chen; Iro Laina; Andrea Vedaldi", "journal": "", "ref_id": "b5", "title": "Training-free layout control with cross-attention guidance", "year": "2023" }, { "authors": "Guillaume Couairon; Marlène Careil; Matthieu Cord; Stéphane Lathuilière; Jakob Verbeek", "journal": "", "ref_id": "b6", "title": "Zero-shot spatial layout conditioning for text-to-image diffusion models", "year": "2023" }, { "authors": "Prafulla Dhariwal; Alexander Quinn; Nichol ", "journal": "", "ref_id": "b7", "title": "Diffusion models beat GANs on image synthesis", "year": "2021" }, { "authors": "Ming Ding; Zhuoyi Yang; Wenyi Hong; Wendi Zheng; Chang Zhou; Da Yin; Junyang Lin; Xu Zou; Zhou Shao; Hongxia Yang; Jie Tang", "journal": "", "ref_id": "b8", "title": "CogView: Mastering text-toimage generation via transformers", "year": "2021" }, { "authors": "Oran Gafni; Adam Polyak; Oron Ashual; Shelly Sheynin; Devi Parikh; Yaniv Taigman", "journal": "", "ref_id": "b9", "title": "Make-A-Scene: Scenebased text-to-image generation with human priors", "year": "2022" }, { "authors": "Taesung Songwei Ge; Jun-Yan Park; Jia-Bin Zhu; Huang", "journal": "", "ref_id": "b10", "title": "Expressive text-to-image generation with rich text", "year": "2023" }, { "authors": "Tejas Gokhale; Hamid Palangi; Besmira Nushi; Vibhav Vineet; Eric Horvitz; Ece Kamar; Chitta Baral; Yezhou Yang", "journal": "", "ref_id": "b11", "title": "Benchmarking spatial relationships in text-to-image generation", "year": "2022" }, { "authors": "Anisha Gunjal; Jihan Yin; Erhan Bas", "journal": "", "ref_id": "b12", "title": "Detecting and preventing hallucinations in large vision language models", "year": "2023" }, { "authors": "Amir Hertz; Ron Mokady; Jay Tenenbaum; Kfir Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "", "ref_id": "b13", "title": "Prompt-to-prompt image editing with cross-attention control", "year": "2023" }, { "authors": "Jack Hessel; Ari Holtzman; Maxwell Forbes; Ronan Le Bras; Yejin Choi", "journal": "", "ref_id": "b14", "title": "CLIPScore: A reference-free evaluation metric for image captioning", "year": "2021" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "", "ref_id": "b15", "title": "Denoising 
diffusion probabilistic models", "year": "2020" }, { "authors": "Matthew Honnibal; Mark Johnson", "journal": "", "ref_id": "b16", "title": "An improved nonmonotonic transition system for dependency parsing", "year": "2015" }, { "authors": "Yushi Hu; Benlin Liu; Jungo Kasai; Yizhong Wang; Mari Ostendorf; Ranjay Krishna; Noah A Smith", "journal": "", "ref_id": "b17", "title": "TIFA: accurate and interpretable text-to-image faithfulness evaluation with question answering", "year": "2023" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "", "ref_id": "b18", "title": "Auto-encoding variational bayes", "year": "2014" }, { "authors": "Yuheng Li; Haotian Liu; Qingyang Wu; Fangzhou Mu; Jianwei Yang; Jianfeng Gao; Chunyuan Li; Yong Jae Lee", "journal": "", "ref_id": "b19", "title": "GLIGEN: Open-set grounded text-to-image generation", "year": "2023" }, { "authors": "Long Lian; Boyi Li; Adam Yala; Trevor Darrell", "journal": "Transactions on Machine Learning Research", "ref_id": "b20", "title": "LLMgrounded diffusion: Enhancing prompt understanding of text-to-image diffusion models with large language models", "year": "2024" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b21", "title": "Magic3D: High-resolution text-to-3d content creation", "year": "2023" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b22", "title": "GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2022" }, { "authors": " Openai", "journal": "", "ref_id": "b23", "title": "", "year": "2023" }, { "authors": "Quynh Phung; Songwei Ge; Jia-Bin Huang", "journal": "", "ref_id": "b24", "title": "Grounded text-to-image synthesis with attention refocusing", "year": "2023" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b25", "title": "DreamFusion: Text-to-3D using 2D diffusion", "year": "2023" }, { "authors": "Leigang Qu; Shengqiong Wu; Hao Fei; Liqiang Nie; Tat-Seng Chua", "journal": "", "ref_id": "b26", "title": "LayoutLLM-T2I: Eliciting layout guidance from LLM for text-to-image generation", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b27", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "", "ref_id": "b28", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b29", "title": "Hierarchical text-conditional image generation with CLIP latents", "year": "2022" }, { "authors": "Scott E Reed; Zeynep Akata; Xinchen Yan; Lajanugen Logeswaran; Honglak Schiele; Lee", "journal": "", "ref_id": "b30", "title": "Generative adversarial text to image synthesis", "year": "2016" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b31", "title": "High-resolution image synthesis with latent diffusion models", "year": 
"2022" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b32", "title": "U-Net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b33", "title": "DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Seyed Kamyar; Seyed Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans; Jonathan Ho; David J Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b34", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Seyed Kamyar; Seyed Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans; Jonathan Ho; David J Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b35", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b36", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Sanjay Subramanian; William Merrill; Trevor Darrell; Matt Gardner; Sameer Singh; Anna Rohrbach", "journal": "", "ref_id": "b37", "title": "ReCLIP: A strong zero-shot baseline for referring expression comprehension", "year": "2022" }, { "authors": "Ming Tao; Hao Tang; Fei Wu; Xiaoyuan Jing; Bing-Kun Bao; Changsheng Xu", "journal": "", "ref_id": "b38", "title": "DF-GAN: A simple and effective baseline for text-to-image synthesis", "year": "2022" }, { "authors": "Jinheng Xie; Yuexiang Li; Yawen Huang; Haozhe Liu; Wentian Zhang; Yefeng Zheng; Mike Zheng Shou", "journal": "", "ref_id": "b39", "title": "BoxDiff: Text-to-image synthesis with training-free box-constrained diffusion", "year": "2023" }, { "authors": "Tao Xu; Pengchuan Zhang; Qiuyuan Huang; Han Zhang; Zhe Gan; Xiaolei Huang; Xiaodong He", "journal": "", "ref_id": "b40", "title": "AttnGAN: Finegrained text to image generation with attentional generative adversarial networks", "year": "2018" }, { "authors": "Han Xue; Zhiwu Huang; Qianru Sun; Li Song; Wenjun Zhang", "journal": "", "ref_id": "b41", "title": "Freestyle layout-to-image synthesis", "year": "2023" }, { "authors": "Zhengyuan Yang; Jianfeng Wang; Zhe Gan; Linjie Li; Kevin Lin; Chenfei Wu; Nan Duan; Zicheng Liu; Ce Liu; Michael Zeng; Lijuan Wang", "journal": "", "ref_id": "b42", "title": "ReCo: Region-controlled text-toimage generation", "year": "2022" }, { "authors": "Jiahui Yu; Yuanzhong Xu; Jing Yu Koh; Thang Luong; Gunjan Baid; Zirui Wang; Vijay Vasudevan; Alexander Ku; Yinfei Yang; Burcu Karagol Ayan; Ben Hutchinson; Wei Han; Zarana Parekh; Xin Li; Han Zhang; Jason Baldridge; Yonghui Wu", "journal": "Transactions on Machine Learning Research", "ref_id": "b43", "title": "Scaling autoregressive models for content-rich text-to-image generation", "year": "2022" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b44", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Zhiyuan Zhang; Zhitong Huang; Jing Liao", "journal": "", "ref_id": "b45", "title": "Continuous layout editing of single images with diffusion models", "year": "2023" }, { "authors": "Guangcong 
Zheng; Xianpan Zhou; Xuewei Li; Zhongang Qi; Ying Shan; Xi Li", "journal": "", "ref_id": "b46", "title": "Layoutdiffusion: Controllable diffusion model for layout-to-image generation", "year": "2023" }, { "authors": "Yiyang Zhou; Chenhang Cui; Jaehong Yoon; Linjun Zhang; Zhun Deng; Chelsea Finn; Mohit Bansal; Huaxiu Yao", "journal": "", "ref_id": "b47", "title": "Analyzing and mitigating object hallucination in large vision-language models", "year": "2023" }, { "authors": "Minfeng Zhu; Pingbo Pan; Wei Chen; Yi Yang", "journal": "", "ref_id": "b48", "title": "DM-GAN: Dynamic memory generative adversarial networks for text-to-image synthesis", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 64.47, 693.39, 221.89, 16.11 ], "formula_id": "formula_0", "formula_text": "L = E z∼E(x),y,ϵ∼N (0,1),t ϵ -ϵ θ z t , t, y 2 2 . (1)" }, { "formula_coordinates": [ 4, 111.51, 77.37, 174.85, 28.18 ], "formula_id": "formula_1", "formula_text": "A l = Softmax Q l K l ⊤ √ d ,(2)" }, { "formula_coordinates": [ 4, 50.11, 185.43, 60.07, 14.59 ], "formula_id": "formula_2", "formula_text": "A l k ∈ R W l ×H l" }, { "formula_coordinates": [ 4, 412.49, 277.84, 132.13, 11.62 ], "formula_id": "formula_3", "formula_text": "b k = ( x k , y k , w k , h k ) ∈ [0, 1] 4" }, { "formula_coordinates": [ 4, 390.17, 514.27, 154.94, 31.41 ], "formula_id": "formula_4", "formula_text": "ĀT = 1 L L l=1 A T,l ,(3)" }, { "formula_coordinates": [ 5, 121.64, 377.2, 164.73, 32.02 ], "formula_id": "formula_5", "formula_text": "Ā = 1 T loc T t=T -T loc Āt .(4)" }, { "formula_coordinates": [ 5, 322.02, 445.22, 223.09, 29.11 ], "formula_id": "formula_6", "formula_text": "A t,l k (i, j) ← A t,l k (i, j) • α if (i, j) in b l k A t,l k (i, j) / α if (i, j) not in b l k ,(5)" }, { "formula_coordinates": [ 5, 371.42, 625.21, 173.69, 14.77 ], "formula_id": "formula_7", "formula_text": "M t,l k = 1 -Softmax(A t,l k ),(6)" }, { "formula_coordinates": [ 5, 325.16, 667.76, 219.95, 14.77 ], "formula_id": "formula_8", "formula_text": "A t,l g ← M t,l k ⊙ A t,l g , for g ∈ [1, N ] and g ̸ = k.(7)" }, { "formula_coordinates": [ 7, 326.2, 72.43, 7.75, 8.64 ], "formula_id": "formula_9", "formula_text": "P D x q G p V o Q h t E c a X b E T a U M 0 k b l l l O 2 7 G m W E S c t q L x 7 c x v P V F t m J J 1 O 4 l p K P B Q s g E j 2 D r p o f 6 Y c k W m N 0 G v W P L L / h x o l Q Q Z K U G G W q / 4" }, { "formula_coordinates": [ 7, 383.39, 72.43, 3.39, 6.78 ], "formula_id": "formula_10", "formula_text": "= \" > A A A B 8 X i c b V B N S w M x E J 2 t X 7 V + V T 1 6 C R b B U 9 m V o l 6 E g h e P F b p t s V 1 L N s 2 2 o d l k S b J C W f o v v H h Q x K v / x p v / x r T d g 7 Y + G H i 8 N 8 P M v D D h T B v X / X Y K a + s b m 1 v F 7 d L O 7 t 7 + Q f n w q K V l q g j 1 i e R S d" }, { "formula_coordinates": [ 8, 246.01, 72, 7.39, 7.39 ], "formula_id": "formula_11", "formula_text": "L Y T R V G E n L b C 0 e 3 U b z 1 R p V k s H 8 w 4 o Y H A g W Q R I 2 i s 9 N h F n g z x 5 s L r l c p e x Z v B X S Z + T s q Q o 9 4 r f X X 7 M U k F l Y Z w 1 L r j e 4 k J M l S G E U 4 n x W 6 q a Y J k h A P a s V S i o D r I Z g d P 3 F O r 9 N 0 o V r a k c W f q 7 4 k M h d Z j E d p O g W a o F 7 2 p + J / X S U 1 0 H W R M J q m h k s w X R S l 3 T e x O v 3 f 7 T F F i + N g S J I r Z W 1 0 y R I X E 2 I y K N g R / 8 e V l 0 j y v + J e V 6 n 2 1 X K v m c R T g G E 7 g D H y 4 g h r c Q R 0 a Q E D A M 7 z C m 6 O c F + f d + Z i" } ]
2023-12-05
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b34", "b25", "b47", "b60", "b11", "b16", "b41", "b49", "b28", "b63", "b67", "b9", "b62", "b9", "b62" ], "table_ref": [], "text": "The recent Segment Anything Model (SAM [35]) stands a significant milestone in image segmentation, attributed to Qi Fan (fanqics@gmail.com) done this work at Kuaishou Technology. Xin Tao (jiangsutx@gmail.com) is the corresponding author. its superior zero-shot generalization ability on new tasks and data distributions. Empowered by the billion-scale training masks and the promptable model design, SAM generalizes to various visual structures in diverse scenarios through flexible prompts, such as box, point, mask or text prompts. Facilitated by high-quality prompts, SAM has produced significant performance benefit for various impor-tant applications, such as healthcare [26,48], remote sensing [13,61], self-driving [12,17], agriculture [42,50], etc. Previous works mainly focus on improving SAM's segmentation performance assuming high-quality prompts are available, such as a tight bounding box (e.g., produced by SOTA detectors [29,64,68]) or sufficient points (e.g., 10 points) for the target object. However, in practice SAM or in fact interactive segmentation often encounters inaccurate or insufficient prompts, casually marked up by users as inaccurate box or very sparse points, especially in the crowdsourcing annotation platform. Such inaccurate prompts often mislead SAM to produce unstable segmentation results as shown in Figure 1. Unfortunately, however, this critical issue has been largely overlooked, even though the suboptimal prompts and the resulting segmentation stability problem are quite prevalent in practice .\nNote that there is no proper off-the-shelf solution for solving SAM's segmentation stability problem with inaccurate prompts. Simply finetuning SAM's mask decoder with imprecise prompts may easily lead to catastrophic forgetting, undermining the integrity of the highly-optimized SAM model and thus sacrificing the zero-shot segmentation generality. Although in the image domain deformable attention [10,63] has shown impressive efficacy on adaptively shifting the model attention to informative regions, which may naturally address the attention drift issue caused by the misleading prompts, a straightforward implementation of this idea can again compromise SAM's integrity.\nIn this paper we present the first comprehensive analysis on SAM's segmentation stability across a wide range of prompt qualities, with a particular focus on low-quality prompts such as imprecise bounding boxes or points. Our findings demonstrate that, when fed with imprecise prompts, the SAM's mask decoder is likely to be misguided to focus on the background or specific object parts, where the cross-attention module is inclined to aggregate and activate image features of these regions when mutually updating the prompt and image tokens. Such collaborative token updating mechanism usually suffers from attention drift, which is accumulated and propagated from the suboptimal prompt to the unsatisfactory segmentation results.\nTo address this issue, we present a novel deformable sampling plugin (DSP) with two key designs to improve SAM's stability while maintaining its zero-shot generality. 
Our key idea is to adaptively calibrate SAM's mask attention by adjusting the attention sampling positions and amplitudes, while keeping the original SAM model unchanged: 1) we employ a small offset network to predict the corresponding offsets and feature amplitudes for each image feature sampling locations, which are learned from the input image feature map; 2) then, we adjust the feature attention by resampling the deformable image features at the updated sampling locations for keys and values of the cross-attention module in SAM's mask decoder, keeping the original SAM model unchanged. In doing so, we can shift the feature sampling attention toward informative regions which is more likely to contain target objects, and meanwhile avoiding the potential model disruption of the original highly-optimized SAM. Finally, to effectively handle both the high-and low-quality prompts, we propose a dynamic routing module to toggle SAM between deformable and regular grid sampling modes. A simple and effective robust training strategy is proposed to facilitate our Stable-SAM to adapt to prompts of diverse qualities.\nThus, our method is unique in its idea and design on solely adjust the feature attention without involving the original model parameters. In contrast, the conventional deformable attention methods [10,63] updates the original network parameters, which is undesirable when adapting powerful foundation models, especially when finetuning large foundation models. Our method thus improves SAM's segmentation stability across a wide range of prompt qualities with minimal learnable paramters and fast adaptation, and meanwhile retains SAM's powerful promptable segmentation efficiency and generality." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b35", "b10", "b29", "b30", "b33", "b7", "b54", "b6", "b34", "b31", "b37", "b21", "b36", "b56", "b3", "b23", "b14", "b15", "b9", "b68", "b5", "b62", "b65", "b69" ], "table_ref": [], "text": "Improving Segmentation Quality. Researchers have proposed various methods to enhance the quality and accuracy of semantic segmentation methods. Early methods incorporate graphical models such as CRF [36] or region growing [11] as an additional post-processing stage, which are usually training-free. Many learning-based methods design new operators [30,31,34] or utilize additional refinement stage [8,55]. Recently, methods such as Mask2Former [7] and SAM [35] have been introduced, which address open-world segmentation by introducing prompt-based approaches. Along this line, a series of improvements [32,38] have been proposed, focusing on prompt-tuning and improving the accuracy of segmentation decoders. However, these methods overlook a crucial aspect, which is how to generate high-quality segmentation results in cases where the prompt is inaccurate. This is precisely the problem that our method aims to address. Tuning Foundation Models. Pretrained models have played an important role since the very beginning of deep learning [22,37,57]. Despite zero-shot generalization grows popular in foundation models of computer vision and natural language processing [4,5], tuning methods such as adapter [25] and prompt-based learning [24,25] have been proposed to generalize these models to downstream tasks [15,16]. These methods typically involves additional training parameters and time. We propose a new method that makes better use of existing features with minimal additional methods and can also produce competitive results. Deformable Attention. 
Deformable convolution [10,69] has been proved effective to help neural features attend to important spatial locations. Recently, it has also been extended to transformer-based networks [6,63,66,70]. Such deformed spatial tokens are especially suitable for our task, which requires dynamically attending to correct regions given inaccurate prompts. However, previous deformable layers involve both offset learning and feature learning after deformation. In this paper, we propose a new approach to adjust the feature attention by simply sampling and modulating the features using deformable operations, without the need to train subsequent layers." }, { "figure_ref": [], "heading": "SAM Stability Anlaysis", "publication_ref": [], "table_ref": [], "text": "In this section, we present a comprehensive investigation into the stability of SAM under prompts of varying quality." }, { "figure_ref": [], "heading": "Segmentation Stability Metric", "publication_ref": [], "table_ref": [], "text": "Prior segmentation studies have focused on achieving high prediction accuracy, gauged by the Intersection-over-Union (IoU) between the predicted and ground truth masks. This focus on high performance is justified as segmentation models typically produce deterministic masks for given input images, without requiring additional inputs.\nHowever, SAM's segmentation output depends on both the image and the prompts, with the latter often varying in quality due to different manual or automatic prompt generators. In practical applications of SAM, segmentation targets are typically clear and unambiguous, independent of prompt quality. For instance, in autonomous driving applications, the goal is to segment the entire car stably and consistently, regardless of whether the prompt-be it a point or a bounding box-initially focuses on a specific part such as the wheel or the car body.\nMotivated by this application requirement, we introduce the segmentation stability metric. Specifically, SAM is capable of producing a set of binary segmentation maps M ∈ R B×H×W for a single target object using B prompts of differing qualities. We define the segmentation stability (mSF) within the set as:\nS = 1 B B i=1 IoU(M i , M union ),(1)\nwhere IoU(M i , M union ) represents the Intersection-over-Union between the i-th segmentation map M i and the collective foreground region B i M i of all maps. This new metric assesses the consistency across segmentations in each prediction, serving as a reliable indicator of stability, even without access to the ground truth masks." }, { "figure_ref": [ "fig_1" ], "heading": "SAM Segmentation Instability", "publication_ref": [ "b31", "b50", "b39", "b45", "b66" ], "table_ref": [], "text": "We perform empirical studies to illustrate the segmentation instability of the current SAM with prompts of differing quality, thereby justifying our Stable SAM approach. Model and Evaluation Details. The released SAM is trained with crafted prompts on large-scale SA-1B dataset.\nWe evaluate the segmentation accuracy and stability of the ViT-Large based SAM with different prompt types and qualities, including box prompts with added noise (noise scale 0.4) and point prompts with varying numbers of points (1, 3, 5, 10 positive points randomly selected from the ground truth mask). For every input image and prompt type, we randomly select 20 prompts to compute their segmentation stability, average mask mIoU, and boundary mBIoU scores. 
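To make the stability metric in Eq. (1) concrete, the following sketch shows one way it could be computed for a single target object from the B binary masks predicted under B prompts of differing quality. This is an illustrative implementation under our own assumptions (PyTorch, boolean mask tensors of shape (B, H, W)), not the authors' released code; the reported mSF additionally averages this per-object score over all images and prompt samples.

```python
# Minimal sketch of the segmentation stability metric S in Eq. (1).
# `masks` holds the B binary masks predicted for one target object
# under B prompts of varying quality, as a (B, H, W) boolean tensor.
import torch

def segmentation_stability(masks: torch.Tensor, eps: float = 1e-6) -> float:
    masks = masks.bool()
    union_all = masks.any(dim=0)                       # collective foreground M_union
    inter = (masks & union_all).flatten(1).sum(dim=1)  # |M_i ∩ M_union| = |M_i|
    union = (masks | union_all).flatten(1).sum(dim=1)  # |M_i ∪ M_union| = |M_union|
    ious = inter.float() / (union.float() + eps)       # IoU(M_i, M_union)
    return ious.mean().item()                          # average over the B prompts
```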
The evaluation utilizes four segmentation datasets as in HQ-SAM [32]: DIS [51] (validation set), ThinObject-5K [40] (test set), COIFT [46], and HR-SOD [67]. Table 1 tabulates that SAM's segmentation accuracy and stability significantly decrease with low-quality prompts, such as imprecise box prompts or point prompts with minimal points. These analysis are performed on the four aforementioned segmentation datasets. The varying segmentation accuracy and stability indicates that SAM's mask decoder performs distinctly when dealing with prompts of varying qualities.\nWe visualize the image activation map for the token-toimage cross-attention in SAM's second mask decoder layer to better understand its response to low-quality prompts. We focus on the second mask decoder layer for visualization because its cross-attention is more representative, benefit- ing from the input tokens and image embedding collaboratively updated by the first mask decoder layer. Figure 2 demonstrates that an inaccurate box prompt causes SAM's mask decoder to miss regions of the target object while incorrectly incorporating features from the background, or focusing on specific object parts. It consequently leads to degraded segmentation accuracy and stability.\nOverall, the above empirical evidence suggests that SAM potentially suffers from the attention drift issue, where suboptimal prompts misleadingly shift attention from the target object to background areas or specific object parts, thereby compromising the accuracy and stability of the segmentation results. This motivates us to calibrate SAM's mask attention by leveraging learnable offsets to adjust the attention sampling position towards the target object regions, thus boosting segmentation accuracy and stability." }, { "figure_ref": [], "heading": "Stable Segment Anything Model", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b34", "b62", "b62" ], "table_ref": [], "text": "We first revisit the recent Segment Anything Model (SAM) and deformable attention mechanism. Segment Anything Model. SAM [35] is a powerful promptable segmentation model. It comprises an image encoder for computing image embeddings, a prompt encoder for embedding prompts, and a lightweight mask decoder for predicting segmentation masks by combining the two information sources. The fast mask mask decoder is a twolayer transformer-based decoder to collaboratively update both the image embedding and prompt tokens via crossattention. SAM is trained on the large-scale SA-1B dataset. Deformable Attention. Deformable attention [63] is a mechanism that enables the model to focus on a subset of key sampling points instead of the entire feature space. This mechanism naturally addresses the attention shift problem in SAM caused by low-quality prompts.\nIn the standard self-attention, given a feature map x ∈ R H×W ×C , the attention weights are computed across all spatial locations within the feature map.\nIn the deformable attention [63], a uniform grid of points r ∈ R H G ×W G ×2 are first generated as the references 1 with the sampled image feature x r ∈ R H G ×W G ×C . Subsequently, a convolutional offset network θ offset predicts the offset ∆r = θ offset (x r ) for each reference point. The new feature sampling locations are given by r + ∆r ∈ R H G ×W G ×2 . 
The resampled deformable image features x r+∆r ∈ R H G ×W G ×C are then utilized as the key and value features in the attention module.\nNote that conventional deformable attention optimizes both the offset network and attention module. Thus directly applying deformable attention to SAM is usually suboptimal, because altering SAM's original network or weights, e.g., substituting SAM's standard attention with deformable attention and retraining, may compromise its integrity." }, { "figure_ref": [ "fig_2" ], "heading": "Deformable Sampling Plugin", "publication_ref": [ "b69" ], "table_ref": [], "text": "To address the attention drift issue while preserving the SAM's integrity, we propose a novel deformable sampling plugin (DSP) module on top of SAM's original token-toimage cross-attention module, as shown in Figure 3.\nSpecifically, given the prompt token feature t ∈ R T×C and image feature x p ∈ R H×W ×C , the token-to-image cross-attention is:\nCAttn(t, x) = σ(Q(t) • K(x p ) T ) • V (x p ),(2)\nwhere p ∈ R H×W ×2 represents the image feature spatial sampling locations, σ denotes the softmax function, and Q, K, V are the query, key, and value embedding projection functions, respectively Our DSP adaptively calibrate the feature attention by adjusting solely image feature sampling locations and amplitudes without altering the original SAM model. Specifically, we utilize an offset network θ offset to predict the feature sampling offset ∆p ∈ R H×W ×2 , akin to that in deformable attention:\n∆p = θ s (θ offset (x p )),(3)\nwhere θ s is a scale function s p • tanh( * ) to prevent too large offset, and s p is a pre-defined scale factor. The offset network θ offset consists of a 1×1 convolution, a 5×5 depthwise convolution with the layer normalization and GELU activation, and a 1 × 1 convolution. The updated feature sampling locations are p + ∆p. The numerical range of both p and p+∆p lies in {(0, 0), ..., (H-1, W -1)}, which is then normalized to the range [-1, 1] for feature sampling. The feature amplitudes are predicted by the first convolutional layer and the image features x p are thus updated as x ⋆ p , which are used solely for computing the feature attention.\nSubsequently, we resample and modulate deformable image features x ⋆ p+∆p ∈ R H×W ×C at the updated sampling locations p + ∆p with the learned feature amplitudes for keys and values. Thus, our DSP calibrates the token-toimage cross-attention of SAM's mask decoder as:\nDCAttn(t, x) = σ(Q(t) • K(x ⋆ p+∆p ) T ) • V (x ⋆ p+∆p ).(4)\nAs p + ∆p is fractional, we apply a bilinear interpolation to compute x ⋆ p+∆p as in Deformable DETR [70]. Note that our DSP only trains the deformable offset network to predict new feature sampling locations p + ∆p and feature amplitudes, and feeds the resampled and modulated deformable features x ⋆ p+∆p to SAM's cross-attention module. Thus, the original SAM model remains unchanged." }, { "figure_ref": [], "heading": "Dynamic Routing Plugin", "publication_ref": [], "table_ref": [], "text": "While our DSP can effectively handle suboptimal and even erroneous prompts, by redirecting SAM's attention to informative regions which are more likely to contain the target objects, high-quality prompts can typically direct the model's attention correctly to target regions. Thus, it is essential to properly control the DSP's activation to prevent unwanted attention shifts.\nTo address this issue, we propose a novel dynamic routing plugin (DRP) that regulates the degree of DSP activation based on the input prompt quality. 
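To ground Eqs. (2)–(4), the sketch below illustrates one possible realization of the deformable sampling plugin on top of a frozen token-to-image cross-attention. The tensor layouts, the GroupNorm stand-in for layer normalization, the sigmoid amplitude head attached to the first 1×1 convolution, and the module and argument names are our assumptions for illustration rather than the released implementation; the routing weights `alpha` anticipate the dynamic routing plugin formulated next.

```python
# Illustrative sketch (not the official code) of the deformable sampling
# plugin (DSP): an offset network predicts per-location offsets and
# amplitudes, image features are resampled at p + Δp, and the frozen SAM
# cross-attention consumes the resampled features as keys and values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableSamplingPlugin(nn.Module):
    def __init__(self, dim: int, offset_scale: float = 2.0):  # offset_scale plays the role of s_p
        super().__init__()
        self.offset_scale = offset_scale
        self.proj_in = nn.Conv2d(dim, dim, 1)                     # 1x1 conv
        self.dw = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)   # 5x5 depthwise conv
        self.norm = nn.GroupNorm(1, dim)                          # layer-norm-like (assumption)
        self.act = nn.GELU()
        self.proj_out = nn.Conv2d(dim, 2, 1)                      # 1x1 conv -> (dx, dy) per location
        self.amp = nn.Conv2d(dim, 1, 1)                           # amplitude head (assumed form)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) image features fed to SAM's mask decoder attention.
        B, C, H, W = x.shape
        h = self.proj_in(x)
        amp = torch.sigmoid(self.amp(h))                          # feature amplitudes
        dp = self.offset_scale * torch.tanh(                      # Eq. (3): s_p * tanh(.)
            self.proj_out(self.act(self.norm(self.dw(h)))))
        ys, xs = torch.meshgrid(torch.arange(H, device=x.device),
                                torch.arange(W, device=x.device), indexing="ij")
        gx = (xs + dp[:, 0]) / max(W - 1, 1) * 2 - 1              # normalize p + Δp to [-1, 1]
        gy = (ys + dp[:, 1]) / max(H - 1, 1) * 2 - 1
        grid = torch.stack([gx, gy], dim=-1)                      # (B, H, W, 2)
        return F.grid_sample(x * amp, grid, align_corners=True)   # resampled, modulated features

def dsp_cross_attention(t, x, dsp, q_proj, k_proj, v_proj, alpha=None):
    """Eq. (4): frozen token-to-image attention over the resampled features.
    `alpha = (a1, a2)` are the routing weights of Eq. (6), formulated below;
    alpha=None corresponds to pure deformable sampling."""
    x_def = dsp(x)
    feats = x_def if alpha is None else alpha[0] * x_def + alpha[1] * x
    feats = feats.flatten(2).transpose(1, 2)                      # (B, HW, C)
    attn = torch.softmax(q_proj(t) @ k_proj(feats).transpose(1, 2)
                         / feats.shape[-1] ** 0.5, dim=-1)        # standard 1/sqrt(d) scaling
    return attn @ v_proj(feats)
```

Since only the small offset/amplitude network (and the routing MLP introduced next) would be trained, SAM's frozen query, key and value projections and the rest of the decoder stay untouched, which is what keeps the number of extra learnable parameters small.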
The DRP can be formulated as follows:\nα = σ(MLP(t o )) • s,(5)\nwhere t o ∈ R 1×C is the prompt token feature corresponding to the output mask, MLP refers to a small MLP network that includes an MLP layer with LayerNorm and GELU activation, as well as an output MLP layer; s denotes a learnable scale and σ denotes the softmax function.\nWe utilize the predicted values of α = [α 1 , α 2 ] ∈ R 1×2 to adaptively route SAM between DSP and original SAM's attention mechanism. Consequently, the token-to-image cross-attention output O(t, x) can be formulated as:\nO(t, x) = CAttn(t, α 1 • x ⋆ p+∆p + α 2 • x p )(6)\nThis soft dynamic routing strategy allows SAM to benefit from both DSP and its original zero-shot generality, contingent upon the quality of the prompt." }, { "figure_ref": [], "heading": "Robust Training Strategy", "publication_ref": [ "b31" ], "table_ref": [], "text": "We propose a simple and effective robust training strategy (RTS) to assist our model to learn how to correct SAM's attention when adversely affected by bad prompts.\nRobust Training Against Inaccurate Prompts. SAM's training, including HQ-SAM [32], typically utilizes highquality prompts given by precise bounding boxes or multiple points to accurately identify the target object. To address inaccurate prompts, our RTS incorporates prompts of varying qualities during training. These prompts include groundtruth boxes, box prompts with added noise (noise scale 0.4), and point prompts with varying numbers of points (1, 3, 10 positive points randomly chosen from the ground truth mask).\nRobust Training Against Ambiguous Prompts. In real segmentation scenarios, target objects often occur in cluttered environment, either occluding others or being occluded. Even given an accurate, tight bounding box, objects other than the target object will be enclosed. On the other hand, target objects are typically unambiguous even other objects are enclosed. For instance, in MS COCO, beds (occluded by quilt) are consistently regarded as target objects; the model must accurately segment the entire bed including accessories such as pillows and bedding. Thus, SAM's original ambiguity-aware solution, which predicts multiple masks for a single prompt, is generally suboptimal in welldefined realistic applications. To address such \"ambiguous\" prompts, our RTS incorporates synthetic occlusion images to make SAM conducive to accurately segment target objects. The occlusion images are synthesized by randomly introducing other objects to simulate \"occluder\" and \"occludee\" relationship.\nOur RTS is general and applicable to various SAM variants to improve their segmentation stability. Notably, our Stable-SAM with DSP and DRP experience the most substantial improvements from the application of RTS. " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b31", "b50", "b39", "b45", "b66", "b40", "b70", "b28", "b67" ], "table_ref": [], "text": "Datasets. For fair comparison we keep our training and testing datasets same as HQ-SAM [32]. Specifically, we train all models on HQSeg-44K dataset, and evaluate their performance on four fine-grained segmentation datasets, including DIS [51] (validation set), ThinObject-5K [40] (test set), COIFT [46] and HR-SOD [67]. Furthermore, we validate the model's zero-shot generalization ability on two challenging segmentation benchmarks, including COCO [41], and SGinW [71], where SGinW contains 25 zero-shot in-the-wild segmentation datasets. 
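For concreteness, the sketch below shows one way the varying-quality prompts used in the robust training strategy above (and in the evaluation that follows) could be synthesized from ground-truth annotations. The paper specifies only the noise scale (0.4), the minimum overlap with the ground-truth box, and the point counts; the particular jittering scheme and the helper names here are our assumptions.

```python
# Illustrative prompt synthesis for robust training / evaluation (assumed scheme).
import numpy as np

def noisy_box(gt_box, noise_scale=0.4, min_iou=0.5, rng=np.random, max_tries=100):
    """Jitter a ground-truth box relative to its size, keeping at least
    `min_iou` overlap with the original box (falls back to the clean box)."""
    x1, y1, x2, y2 = map(float, gt_box)
    w, h = x2 - x1, y2 - y1
    for _ in range(max_tries):
        jitter = rng.uniform(-noise_scale, noise_scale, size=4) * np.array([w, h, w, h])
        nx1, ny1, nx2, ny2 = np.array([x1, y1, x2, y2]) + jitter
        nx1, nx2 = min(nx1, nx2), max(nx1, nx2)
        ny1, ny2 = min(ny1, ny2), max(ny1, ny2)
        inter = max(0.0, min(x2, nx2) - max(x1, nx1)) * max(0.0, min(y2, ny2) - max(y1, ny1))
        union = w * h + (nx2 - nx1) * (ny2 - ny1) - inter
        if union > 0 and inter / union >= min_iou:
            return [nx1, ny1, nx2, ny2]
    return [x1, y1, x2, y2]

def point_prompts(gt_mask, num_points=3, rng=np.random):
    """Randomly sample positive points (x, y) from the ground-truth mask."""
    ys, xs = np.nonzero(gt_mask)
    idx = rng.choice(len(xs), size=min(num_points, len(xs)), replace=False)
    return np.stack([xs[idx], ys[idx]], axis=1)
```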
More experimental results are included in the supplementary material. Input Prompts. We evaluate model's accuracy and stability with prompts of differing type and quality, as described in Sec. 3.2. For MS COCO and SGinW, we do not use the boxes generated by SOTA detectors [29,68] as the box prompt. This is because their predicted boxes are typically of high quality and cannot effectively evaluate the model's segmentation stability in the presence of inaccurate boxes. Instead, we introduce random scale noises into the ground truth boxes to generate noisy boxes as the prompts. Specifically, to simulate inaccurate boxes while still having some overlap with the target object, we select noisy boxes that partially overlap with the ground truth boxes with IoU ranges of 0.5-0.6 and 0.6-0.7. We also evaluate our method using the box prompts generated by SOTA detectors. Evaluation Metrics. We select suitable evaluation metrics depending on testing datasets, i.e., 1) mask mIoU, boundary mBIoU and mSF for DIS, ThinObject-5K, COIFT, and HR-SOD; 2) mask mAP and mAP 50 for COCO and SGinW." }, { "figure_ref": [], "heading": "Comparison with SAM Variants", "publication_ref": [], "table_ref": [], "text": "We compare our method with SAM and three powerful SAM variants. HQ-SAM is a recent powerful SAM variant for producing high-quality masks. We also try two simple SAM variants by finetuning its mask decoder and the prompt token, i.e., DT-SAM and PT-SAM, respectively. Although, in most practical applications, users prefer minimal interaction with clear and consistent segmentation targets. Our method maintains much better performance and stability when handling ambiguous one-point prompt, owing to our deformable feature sampling and robust training strategy against ambiguity. When point prompts increase to 3, all methods performs much better, while other methods still under-perform compared with ours." }, { "figure_ref": [], "heading": "Generalization Comparison on MS COCO and SGinW.", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Table 3 presents the segmentation accuracy and stability when the models are generalized to MS COCO and SGinW with noisy box prompts. Note that the DT-SAM performs the worst, probably due to overfitting on the training set, which compromises its ability to generalize to new datasets. Our method consistently surpasses all competitors, particularly in handling inaccurate boxes (N-Box 0.5-0.6), where all noisy boxes have an IoU range of 0.5-0.6 with the ground truth boxes. Note that our method has a minimal number of extra learnable parameters (0.08M) and can be quickly adapted to new datasets by just one training epoch." }, { "figure_ref": [], "heading": "Comparison Based on Detector Predicted Box Prompts.", "publication_ref": [ "b63", "b67", "b69" ], "table_ref": [], "text": "Existing zero-shot segmentation methods typically choose powerful object detection model to generate high-quality boxes as the input prompts, such as FocalNet-L-DINO [64,68]. We also evaluate our method in such setting. Table 4 presents that our model achieves comparable performance as SAM and PT-SAM when using the FocalNet-L-DINO generated high-quality boxes as prompts. When using the R50-H-Deformable-DETR [70] as the box prompt generator, our method achieves comparable performance as HQ-SAM. Note that training and implementing SOTA detectors typically require large computational resources and the cross-domain generalization is still very challenging. 
In practice, users tend to leverage interactive tools to annotate objects for their personalized datasets. Our method substantially surpasses other competitors in such scenario, when the box can roughly indicate the target object." }, { "figure_ref": [ "fig_3" ], "heading": "Analysis on Stable-SAM", "publication_ref": [], "table_ref": [ "tab_5", "tab_5", "tab_6", "tab_6", "tab_7" ], "text": "Deformable Sampling Plugin. Table 5 shows DSP can be trained with high-quality prompts (without RTS) to improve the performance and stability on low-quality prompts, although the model still exhibits some instability. When equipped with RTS, DSP can effectively learn to shift SAM's attention to target objects when subjecting to inaccurate prompts. To delve deeper into the deformable sampling mechanism, we visualize the sampled feature points and their corresponding attention weights. Figure 4 trates how our DSP effectively shifts model's attention to the target object, resulting in increased attention weights. Consequently, the cross-attention module aggregates more target object features into the prompt tokens, thereby improving the segmentation quality of the target objects. Dynamic Routing Plugin. We leverage DSP to dynamically route the model between the regular and deformable feature sampling modes, conditioned on the input prompt quality. We find that DRP tends to route more DSP features when dealing with worse prompts. The DSP routing weight α 1 is increased from 0.469 to 0.614 when we change the point prompt from three points to one point. It indicates that lower-quality prompts rely more on DSP features to shift attention to the desirable regions. Table 5 shows that DRP can further improve model's performance, especially when handling the challenging one-point prompt scenario. Robust Training Strategy. Robust training is critical for improving model's segmentation stability, but is usually overlooked in previous works. RTS can guide the model, including our DSP, to accurately segment target objects even when provided with misleading low-quality prompts. Table 6 shows that RTS substantially improves the segmentation stability of all the methods, albeit with a slight compromise in performance when dealing with high-quality prompts. Note that our Stable-SAM benefits the most from the application of RTS, which can be attributed to our carefully designed deformable sampling plugin design. Model Scalability. Our method solely calibrates SAM's mask attention by adjusting model's feature sampling locations and amplitudes using a minimal number of learnable parameters (0.08 M), while keeping the model architecture and parameters intact. This plugin design grants our method with excellent model scalability. Table 6 shows that our model can be rapidly optimized by just one training epoch, achieving comparable performance and stability. By scaling the training procedure to 12 epochs, our method achieves the best performance across all prompting settings. Additionally, our method can cooperate with other SAM variants. For instance, when combined with HQ-SAM, the performance and stability are further improved. Low-Shot Generalization. Customized datasets with mask annotation are often limited, typically consisting of only hundreds of images. For a fair comparison, all methods in 7 shows that HQ-SAM performs worst when trained with a limited number of images (220 or 440 images), which can be attributed to its potential overfitting problem caused by the relatively large learnable model parameters (5.1 M). 
In contrast, PT-SAM's better performance with minimal learnable parameters (0.13 M) further validates this hypothesis.\nOur plugin design, coupled with minimal learnable parameters, enables effective low-shot generalization, and thus achieves the best performance in such scenario." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we present the first comprehensive analysis on SAM's segmentation stability across a wide range of prompt qualities. " }, { "figure_ref": [], "heading": "MESS", "publication_ref": [ "b2", "b34", "b31", "b57" ], "table_ref": [ "tab_11" ], "text": "The recently released Multi-domain Evaluation of Semantic Segmentation (MESS) [3] is a large-scale benchmark for holistic analysis of zero-shot segmentation performance. MESS consists of 22 downstream tasks, a total of 448 classes, and 25079 images, covering a wide range of domain-specific datasets in the fields of earth monitoring, medical sciences, engineering, agriculture and biology and other general domains. We evaluate SAM [35], HQ-SAM [32] and our Stable-SAM on MESS benchmark using the official MESS evaluation code, and report the mean of class-wise intersection over union (mIoU). Following MESS's model settings, our Stable-SAM selects the first mask of the predicted multiple masks as the output. For a fair comparison, our Stable-SAM follows HQ-SAM to fuse the SAM's original prediction map into our predicted segmentation map. We provide four prompt types for evaluation. The oracle point refers to a single point sampled from the ground-truth mask using the point sampling approach RITM [58]. The random point refers to a single point randomly sampled from the ground-truth mask of the target object. The oracle box refers to a single box tightly enclosing the ground-truth mask of the target object. The noisy box refers to a single box generated by adding noise (noise scale 0.4) to the oracle box.\nTable 8 tabulates the zero-shot semantic segmentation performance comparison on MESS. Our Stable-SAM performs best when prompted with oracle point, random point and noisy box, and achieves comparable performance when provided with oracle box. Our competitive performance on the large-scale MESS benchmark further consolidates the powerful zero-shot generalization ability inherent in our Stable-SAM. Table 9 shows the dataset and comparison details on 22 tasks of MESS benchmark. Our Stable-SAM performs best on 19 out of 22 datasets. " }, { "figure_ref": [], "heading": "Backbone Variants", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Relation to Other Methods", "publication_ref": [ "b9", "b62", "b22", "b61" ], "table_ref": [ "tab_13", "tab_13" ], "text": "Deformable Attention. Our method is unique in its idea and design on solely adjusting the feature sampling locations and amplitudes by training the offset network, without involving the original model parameters. In contrast, conventional deformable attention methods [10,63] train both the offset network and original network parameters, which is undesirable when adapting powerful foundation models in deployment, especially in finetuning large foundation models. Figure 5 attention.\nWe apply the conventional deformable attention in our Stable-SAM by finetuning the mask decoder during training. Table 11 shows that the conventional deformable attention (Stable-SAM (finetuning decoder)) exhibits the worst generalization ability on MS COCO, even worse than the original SAM model. 
This further validates the necessity and better performance of our deformable sampling plugin paradigm, i.e., adapting the foundation model by only adjusting the feature sampling locations and amplitudes, while fixing the original model features and parameters. Spatial Attention. The spatial attention [23,62] the image spatial feature weights, and thus can be regarded as a soft feature sampling method. We directly replace DSP with spatial attention in our Stable-SAM to investigate if spatial attention offers comparable effectiveness. Table 11 shows that spatial attention performs much worse than our DSP, although it consistently improves the segmentation performance and stability on all datasets. This indicates that simply adjusting the feature weights is insufficient to adapt SAM for handling suboptimal prompts." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b18", "b48" ], "table_ref": [], "text": "During training, we only train DSP and DRP on HQSeg-44K dataset while fixing the model parameters of the pretrained SAM model. We train Stable-SAM on 8 NVIDIA Tesla V100 GPUs with a total batch size of 32, using Adam optimizer with zero weight decay and 0.001 learning rate. The training images are augmented using large-scale jittering [19]. The input prompts are randomly sampled from mixed prompt types, including ground truth bounding boxes, randomly sampled points (1, 3, 5, 10 positive points randomly selected from the ground truth mask), noisy boxes (generated by adding noise (noise scale 0.4) to the ground truth bounding boxes, where we ensure the generated noisy boxes have at least 0.5 overlap IoU with the ground truth boxes), and coarse masks (generated by adding Gaussian noise in the boundary regions of the ground truth masks). The model is optimized using cross entropy loss and dice loss [49].\nWe follow the same inference pipeline of the original SAM. The mask decoder first predicts a small mask in 256 × 256 spatial resolution for each prompt, which is then up-sampled to the original resolution 1024 × 1024 as the output mask." }, { "figure_ref": [ "fig_4" ], "heading": "Stability Visualization", "publication_ref": [], "table_ref": [], "text": "Figure 6-16 show extensive visualization comparisons between SAM and Stable-SAM, under box, 3-points and 1point prompts of diverse qualities. We also visualize the image activation map for the token-to-image cross-attention in SAM's second mask decoder layer to better understand its response to low-quality prompts. The important features are highlighted by the orange circles, with larger radius indicating higher attention score. SAM yields unsatisfactory segmentation results when provided with low-quality prompts, and even a minor prompt modification leads to unstable segmentation output. In contrast, our Stable-SAM produces consistent and accurate mask predictions even under prompts of diverse qualities, by shifting more feature sampling attention to the target object. " }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Domain", "publication_ref": [ "b64", "b52", "b33" ], "table_ref": [], "text": "Sensor type Mask size # Classes # Images Task SAM HQ Ours BDD100K [65] General Visible spectrum Medium 19 (Medium) 1,000 Driving 48.9 43.66 51.84 Dark Zurich [53] Visible spectrum Medium 20 (Medium) 50 Driving 54. 34 " } ]
The Segment Anything Model (SAM) achieves remarkable promptable segmentation given high-quality prompts, which, however, often require good skills to specify. To make SAM robust to casual prompts, this paper presents the first comprehensive analysis of SAM's segmentation stability across a diverse spectrum of prompt qualities, notably imprecise bounding boxes and insufficient points. Our key finding reveals that, given such low-quality prompts, SAM's mask decoder tends to activate image features that are biased towards the background or confined to specific object parts. To mitigate this issue, our key idea is to calibrate solely SAM's mask attention by adjusting the sampling locations and amplitudes of image features, while the original SAM model architecture and weights remain unchanged. Consequently, our deformable sampling plugin (DSP) enables SAM to adaptively shift attention to the prompted target regions in a data-driven manner. During inference, a dynamic routing plugin (DRP) is proposed that toggles SAM between the deformable and regular grid sampling modes, conditioned on the input prompt quality. Thus, our solution, termed Stable-SAM, offers several advantages: 1) it improves SAM's segmentation stability across a wide range of prompt qualities, while 2) retaining SAM's powerful promptable segmentation efficiency and generality, with 3) minimal learnable parameters (0.08 M) and fast adaptation. Extensive experiments validate the effectiveness and advantages of our approach, underscoring Stable-SAM as a more robust solution for segmenting anything.
Stable Segment Anything Model
[ { "figure_caption": "Figure 1 .1Figure 1. The top row shows a performance comparison among SAM, HQ-SAM and our Stable-SAM, when provided with suboptimal prompts. Our Stable-SAM consistently surpasses other methods across prompts of different quality, demonstrating better or comparable performance to the SAM prompted by ground truth box. The bottom row displays the predicted masks and sampled important image features of SAM and Stable-SAM, with orange circles representing the attention weights, where a larger radius indicates a higher score. (a) SAM yields satisfactory segmentation results when provided with a high-quality box prompt. (b) Even a minor prompt modification leads to unstable segmentation output. SAM incorrectly segments the background, where the inaccurate box prompt misleads SAM to spend more attention to the background. (c) Our Stable-SAM accurately segments the target object by shifting more feature sampling attention to it.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. SAM performs badly when dealing with suboptimal prompts. This is mainly caused by the undesirable feature attention, focusing on the background or specific object parts. (The important features are highlighted by the orange circles, with larger radius indicating higher attention score. Please zoom-in on color screen for better visualization.)", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. (a) An illustration of our deformable sampling plugin (DSP) and deformable routing plugin (DRP) in SAM's mask decoder transformer. DSP employs a small (b) offset network to predict the feature sampling offsets and amplitudes. Subsequently, DSP calibrates the feature attention by resampling deformable image features at the updated sampling locations, and feeds them into SAM's token-toimage attention. DRP employs a small (c) MLP network to regulate the degree of DSP activation based on the input prompt quality. Note that our DSP adaptively calibrates solely SAM's mask attention without altering the original SAM model.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Visual results for box prompts (1st and 2nd row), for 1-point prompt (3rd) and 3-points prompt (4th). Within each image group in the first two rows, the three figures represent the results of SAM with GT box prompt, SAM with noisy box prompt, and Stable-SAM with noisy box prompt, respectively. The last two rows display the results of SAM and Stable-SAM with point prompts.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Visual results for box prompts. Within each image pair given the same prompt (green box), the subfigures represent the results of SAM and Stable-SAM, respectively.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Visual results for box prompts.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Visual results for box prompts.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. 
Visual results for 3-points prompt.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. Visual results for 3-points prompt.", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. Visual results for 3-points prompt.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 .12Figure 12. Visual results for 1-point prompt.", "figure_data": "", "figure_id": "fig_10", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 .13Figure 13. Visual results for 1-point prompt.", "figure_data": "", "figure_id": "fig_11", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 .14Figure 14. Visual results for 1-point prompt.", "figure_data": "", "figure_id": "fig_12", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 .15Figure 15. Visual results for 1-point prompt.", "figure_data": "", "figure_id": "fig_13", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 .16Figure 16. Visual results for 1-point prompt.", "figure_data": "", "figure_id": "fig_14", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Comparison on four HQ datasets among SAM, DT-SAM (finetuning SAM's mask decoder), PT-SAM (finetuning SAM's prompt token and its corresponding output MLP layer), HQ-SAM and our Stable-SAM, under prompts of varying quality.", "figure_data": "Noisy Box1 Point3 PointsModelEpoch mIoU mBIoU mSF mIoU mBIoU mSF mIoU mBIoU mSFSAM (baseline)-48.842.139.543.337.445.178.769.579.3DT-SAM1270.660.464.043.143.237.980.371.680.5PT-SAM1270.860.264.143.042.938.380.171.880.4HQ-SAM [32]1272.462.865.543.244.637.481.873.781.4Stable-SAM182.374.182.376.968.471.184.075.884.9MS COCOSGinWN-Box (0.5-0.6) N-Box (0.6-0.7) N-Box (0.5-0.6) N-Box (0.6-0.7) LearnableModelEpoch mAP mAP 50 mAP mAP 50 mAP mAP 50 mAP mAP 50Params FPSSAM (baseline)-27.360.240.975.026.060.839.573.2(1191 M) 5.0DT-SAM1212.222.715.828.710.421.513.627.13.9 M5.0PT-SAM1230.263.441.376.532.166.441.174.30.13 M 5.0HQ-SAM [32]1231.965.542.977.133.668.442.275.95.1 M4.8Stable-SAM144.876.450.581.143.375.648.679.40.08 M 5.0", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison on MS COCO with the box prompts generated by SOTA detectors or noisy box prompts.", "figure_data": "ModelmAPmAP50mAP mAP50 mAP mAP50SAM48.575.341.563.727.3 60.2PT-SAM 48.675.541.764.230.2 63.4HQ-SAM 49.575.742.464.531.9 65.5Ours48.374.842.264.044.8 76.4", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "illus-Ablation study on deformable sampling plugin (DSP), dynamic routing plugin (DRP) and robust training strategy (RTS).", "figure_data": "Noisy Box1 PointModelmIoU mBIoU mSF mIoU mBIoU mSFSAM (baseine)48.8 42.1 39.5 43.3 37.4 45.1+ DSP69.9 60.2 67.2 46.8 40.8 48.0+ DSP + RTS81.7 73.5 81.6 75.9 67.5 70.6+ DSP + DRP + RTS 82.3 74.1 82.3 76.9 68.4 71.1", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Model scalability study.", "figure_data": "Groundtruth BoxNoisy BoxModelmIoU mBIoU mIoU mBIoU mSFSAM (baseine)79.571.148.842.139.5Without RTS:PT-SAM87.679.770.660.464.0HQ-SAM89.181.872.462.865.5Ours (1 
epoch)87.480.069.660.066.5Ours (12 epochs) 89.182.172.763.267.4With RTS:PT-SAM86.878.482.173.178.7HQ-SAM87.479.882.974.580.4Ours (1 epoch)86.078.482.374.182.3Ours (12 epochs) 87.480.184.476.785.2HQ-SAM + Ours 88.781.586.178.786.3Noisy Box1 PointModelmIoU mBIoU mSF mIoU mBIoU mSFSAM (baseine)48.842.1 39.5 43.337.4 45.1220 train images:PT-SAM77.667.7 72.6 71.863.2 73.0HQ-SAM73.562.3 67.7 71.362.6 72.4Ours78.870.0 78.9 73.064.7 74.5440 train images:PT-SAM78.669.0 74.4 76.267.4 75.0HQ-SAM77.467.1 75.6 74.664.6 71.9Ours81.673.5 82.6 79.871.5 82.5", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Low-shot generalization comparison. All models are trained with RTS by 12 training epochs.", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "are trained with RTS by 12 training epochs. Table", "figure_data": "", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "tabulates the performance comparison on different backbone variants. Our Stable-SAM consistently performs better than other methods on all backbone variants.", "figure_data": "", "figure_id": "tab_10", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Comparison on Multi-domain Evaluation of Semantic Segmentation (MESS) benchmark, consisting of 22 downstream datasets,", "figure_data": "shows the difference between ourdeformable sampling plugin and conventional deformable", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Comparison on MS COCO and four HQ datasets for different backbone variants.", "figure_data": "", "figure_id": "tab_12", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Comparison on MS COCO and four HQ datasets for different Stable-SAM variants. The \"finetuning decoder\" denotes finetuning the mask decoder when training Stable-SAM.Figure5. Method difference between our deformable sampling plugin and conventional deformable attention.", "figure_data": "can adjust", "figure_id": "tab_13", "figure_label": "11", "figure_type": "table" } ]
Qi Fan; Xin Tao; Lei Ke; Mingqiao Ye; Yuan Zhang; Pengfei Wan; Zhongyuan Wang; Yu-Wing Tai; Chi-Keung Tang
[ { "authors": "Dina Bashkirova; Mohamed Abdelfattah; Ziliang Zhu; James Akl; Fadi Alladkani; Ping Hu; Vitaly Ablavsky; Berk Calli; Sarah Adel Bargal; Kate Saenko", "journal": "", "ref_id": "b0", "title": "Zerowaste dataset: towards deformable object segmentation in cluttered scenes", "year": "2022" }, { "authors": "Eric Bianchi; Matthew Hebdon", "journal": "", "ref_id": "b1", "title": "Corrosion condition state semantic segmentation dataset", "year": "2021" }, { "authors": "Benedikt Blumenstiel; Johannes Jakubik; Hilde Kühne; Michael Vössing", "journal": "", "ref_id": "b2", "title": "What a MESS: Multi-Domain Evaluation of Zero-shot Semantic Segmentation", "year": "2023" }, { "authors": "Rishi Bommasani; Drew A Hudson; Ehsan Adeli; Russ Altman; Simran Arora; Sydney Von Arx; Jeannette Michael S Bernstein; Antoine Bohg; Emma Bosselut; Brunskill", "journal": "", "ref_id": "b3", "title": "On the opportunities and risks of foundation models", "year": "2021" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "NeurIPS", "ref_id": "b4", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Zhiyang Chen; Yousong Zhu; Chaoyang Zhao; Guosheng Hu; Wei Zeng; Jinqiao Wang; Ming Tang", "journal": "ACM MM", "ref_id": "b5", "title": "Dpt: Deformable patch-based transformer for visual recognition", "year": "2021" }, { "authors": "Bowen Cheng; Ishan Misra; Alexander G Schwing; Alexander Kirillov; Rohit Girdhar", "journal": "", "ref_id": "b6", "title": "Masked-attention mask transformer for universal image segmentation", "year": "2022" }, { "authors": "Jihoon Ho Kei Cheng; Yu-Wing Chung; Chi-Keung Tai; Tang", "journal": "", "ref_id": "b7", "title": "Cascadepsp: Toward class-agnostic and very highresolution segmentation via global and local refinement", "year": "2020" }, { "authors": "Nadav Cohen; Yael Newman; Ariel Shamir", "journal": "Computer Graphics Forum", "ref_id": "b8", "title": "Semantic segmentation in art paintings", "year": "2022" }, { "authors": "Jifeng Dai; Haozhi Qi; Yuwen Xiong; Yi Li; Guodong Zhang; Han Hu; Yichen Wei", "journal": "", "ref_id": "b9", "title": "Deformable convolutional networks", "year": "2017" }, { "authors": "Ambrozio Philipe; Henry Dias; Medeiros", "journal": "", "ref_id": "b10", "title": "Semantic segmentation refinement by monte carlo region growing of high confidence detections", "year": "2019" }, { "authors": "Atharva Dikshit; Alison Bartsch; Abraham George; Amir Barati; Farimani ", "journal": "", "ref_id": "b11", "title": "Robochop: Autonomous framework for fruit and vegetable chopping leveraging foundational models", "year": "2023" }, { "authors": "Lei Ding; Kun Zhu; Daifeng Peng; Hao Tang; Haitao Guo", "journal": "", "ref_id": "b12", "title": "Adapting segment anything model for change detection in hr remote sensing images", "year": "2023" }, { "authors": "Seyed Mohammad; Hassan Erfani; Zhenyao Wu; Xinyi Wu; Song Wang; Erfan Goharian", "journal": "Environmental Modelling & Software", "ref_id": "b13", "title": "Atlantis: A benchmark for semantic segmentation of waterbody images", "year": "2022" }, { "authors": "Qi Fan; Wei Zhuo; Chi-Keung Tang; Yu-Wing Tai", "journal": "", "ref_id": "b14", "title": "Fewshot object detection with attention-rpn and multi-relation detector", "year": "2020" }, { "authors": "Qi Fan; Wenjie Pei; Yu-Wing Tai; Chi-Keung Tang", "journal": "", "ref_id": "b15", "title": "Selfsupport 
few-shot semantic segmentation", "year": "2022" }, { "authors": "Qi Fan; Mattia Segu; Yu-Wing Tai; Fisher Yu; Chi-Keung Tang; Bernt Schiele; Dengxin Dai", "journal": "ICLR", "ref_id": "b16", "title": "Towards robust object detection invariant to real-world domain shifts", "year": "2022" }, { "authors": "Muhammad Moazam Fraz; Paolo Remagnino; Andreas Hoppe; Bunyarit Uyyanonvara; Christopher G Alicja R Rudnicka; Sarah A Owen; Barman", "journal": "IEEE Transactions on Biomedical Engineering", "ref_id": "b17", "title": "An ensemble classification-based approach applied to retinal blood vessel segmentation", "year": "2012" }, { "authors": "Golnaz Ghiasi; Yin Cui; Aravind Srinivas; Rui Qian; Tsung-Yi Lin; Ekin D Cubuk; Quoc V Le; Barret Zoph", "journal": "", "ref_id": "b18", "title": "Simple copy-paste is a strong data augmentation method for instance segmentation", "year": "2021" }, { "authors": "Rahul Ghosh; Praveen Ravirathinam; Xiaowei Jia; Ankush Khandelwal; David Mulla; Vipin Kumar", "journal": "ICBD", "ref_id": "b19", "title": "Calcrop21: A georeferenced multi-spectral dataset of satellite imagery and crop labels", "year": "2021" }, { "authors": "Sebastian Haug; Jörn Ostermann", "journal": "", "ref_id": "b20", "title": "A crop/weed field image dataset for the evaluation of computer vision based precision agriculture tasks", "year": "2015" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b21", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Qibin Hou; Daquan Zhou; Jiashi Feng", "journal": "", "ref_id": "b22", "title": "Coordinate attention for efficient mobile network design", "year": "2021" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "", "ref_id": "b23", "title": "Parameter-efficient transfer learning for nlp", "year": "2019" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "ICLR", "ref_id": "b24", "title": "Lora: Low-rank adaptation of large language models", "year": "2022" }, { "authors": "Yuhao Huang; Xin Yang; Lian Liu; Han Zhou; Ao Chang; Xinrui Zhou; Rusi Chen; Junxuan Yu; Jiongquan Chen; Chaoyu Chen", "journal": "", "ref_id": "b25", "title": "Segment anything model for medical images?", "year": "2023" }, { "authors": "Md Jahidul Islam; Chelsey Edge; Yuyang Xiao; Peigen Luo; Muntaqim Mehtaz; Christopher Morse; Sadman Sakib Enan; Junaed Sattar", "journal": "", "ref_id": "b26", "title": "Semantic segmentation of underwater imagery: Dataset and benchmark", "year": "2020" }, { "authors": "Debesh Jha; Sharib Ali; Krister Emanuelsen; Steven A Hicks; Vajira Thambawita; Enrique Garcia-Ceja; Michael A Riegler; Thomas De Lange; Peter T Schmidt; Håvard D Johansen", "journal": "", "ref_id": "b27", "title": "Kvasir-instrument: Diagnostic and therapeutic tool segmentation dataset in gastrointestinal endoscopy", "year": "2021" }, { "authors": "Ding Jia; Yuhui Yuan; Haodi He; Xiaopei Wu; Haojun Yu; Weihong Lin; Lei Sun; Chao Zhang; Han Hu", "journal": "", "ref_id": "b28", "title": "Detrs with hybrid matching", "year": "2023" }, { "authors": "Lei Ke; Martin Danelljan; Xia Li; Yu-Wing Tai; Chi-Keung Tang; Fisher Yu", "journal": "", "ref_id": "b29", "title": "Mask transfiner for high-quality instance segmentation", "year": "2022" }, { "authors": "Lei Ke; Henghui Ding; Martin Danelljan; Yu-Wing Tai; Chi-Keung 
Tang; Fisher Yu", "journal": "", "ref_id": "b30", "title": "Video mask transfiner for highquality video instance segmentation", "year": "2022" }, { "authors": "Lei Ke; Mingqiao Ye; Martin Danelljan; Yifan Liu; Yu-Wing Tai; Chi-Keung Tang; Fisher Yu", "journal": "NeurIPS", "ref_id": "b31", "title": "Segment anything in high quality", "year": "2023" }, { "authors": "Kourosh Khoshelham; , L Díaz Vilariño; Michael Peter; Zhizhong Kang; Debaditya Acharya", "journal": "The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences", "ref_id": "b32", "title": "The isprs benchmark on indoor modelling", "year": "2017" }, { "authors": "Alexander Kirillov; Yuxin Wu; Kaiming He; Ross Girshick", "journal": "", "ref_id": "b33", "title": "Pointrend: Image segmentation as rendering", "year": "2020" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo; Piotr Dollar; Ross Girshick", "journal": "", "ref_id": "b34", "title": "Segment anything", "year": "2023" }, { "authors": "Philipp Krähenbühl; Vladlen Koltun", "journal": "NeurIPS", "ref_id": "b35", "title": "Efficient inference in fully connected crfs with gaussian edge potentials", "year": "2011" }, { "authors": "Alex Krizhevsky; Ilya Sutskever; Geoffrey E Hinton", "journal": "NeurIPS", "ref_id": "b36", "title": "Imagenet classification with deep convolutional neural networks", "year": "2012" }, { "authors": "Feng Li; Hao Zhang; Peize Sun; Xueyan Zou; Shilong Liu; Jianwei Yang; Chunyuan Li; Lei Zhang; Jianfeng Gao", "journal": "", "ref_id": "b37", "title": "Semantic-sam: Segment and recognize anything at any granularity", "year": "2023" }, { "authors": "Jianshu Li; Jian Zhao; Yunchao Wei; Congyan Lang; Yidong Li; Terence Sim; Shuicheng Yan; Jiashi Feng", "journal": "", "ref_id": "b38", "title": "Multiplehuman parsing in the wild", "year": "2017" }, { "authors": "Jun Hao Liew; Scott Cohen; Brian Price; Long Mai; Jiashi Feng", "journal": "", "ref_id": "b39", "title": "Deep interactive thin object selection", "year": "2021" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "ECCV", "ref_id": "b40", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Xuanyu Liu", "journal": "", "ref_id": "b41", "title": "A sam-based method for large-scale crop field boundary delineation", "year": "2023" }, { "authors": "Yahui Liu; Jian Yao; Xiaohu Lu; Renping Xie; Li Li", "journal": "Neurocomputing", "ref_id": "b42", "title": "Deepcrack: A deep hierarchical feature learning architecture for crack segmentation", "year": "2019" }, { "authors": "Ye Lyu; George Vosselman; Gui-Song Xia; Alper Yilmaz; Michael Ying; Yang ", "journal": "ISPRS journal of photogrammetry and remote sensing", "ref_id": "b43", "title": "Uavid: A semantic segmentation dataset for uav imagery", "year": "2020" }, { "authors": "Amirreza Mahbod; Gerald Schaefer; Benjamin Bancher; Christine Löw; Georg Dorffner; Rupert Ecker; Isabella Ellinger", "journal": "Computers in biology and medicine", "ref_id": "b44", "title": "Cryonuseg: A dataset for nuclei instance segmentation of cryosectioned h&e-stained histological images", "year": "2021" }, { "authors": "Lucy Alsina; Choque Mansilla; Paulo André Vechiatto De; Miranda ", "journal": "", "ref_id": "b45", "title": "Object segmentation by oriented image foresting transform with connectivity 
constraints", "year": "2019" }, { "authors": "Gonzalo Mateo-Garcia; Joshua Veitch-Michaelis; Lewis Smith; Silviu Vlad Oprea; Guy Schumann; Yarin Gal; Atılım Günes ¸baydin; Dietmar Backes", "journal": "Scientific reports", "ref_id": "b46", "title": "Towards global flood mapping onboard low cost satellites with machine learning", "year": "2021" }, { "authors": "Haoyu Maciej A Mazurowski; Hanxue Dong; Jichen Gu; Nicholas Yang; Yixin Konz; Zhang", "journal": "Medical Image Analysis", "ref_id": "b47", "title": "Segment anything model for medical image analysis: an experimental study", "year": "2023" }, { "authors": "Fausto Milletari; Nassir Navab; Seyed-Ahmad Ahmadi", "journal": "", "ref_id": "b48", "title": "V-net: Fully convolutional neural networks for volumetric medical image segmentation", "year": "2016" }, { "authors": "Khoa Dang Nguyen; Thanh-Hai Phung; Hoang-Giang Cao", "journal": "ICCVW", "ref_id": "b49", "title": "A sam-based solution for hierarchical panoptic segmentation of crops and weeds competition", "year": "2023" }, { "authors": "Xuebin Qin; Hang Dai; Xiaobin Hu; Deng-Ping Fan; Ling Shao; Luc Van Gool", "journal": "", "ref_id": "b50", "title": "Highly accurate dichotomous image segmentation", "year": "2022" }, { "authors": "Maryam Rahnemoonfar; Tashnim Chowdhury; Argho Sarkar; Debvrat Varshney; Masoud Yari; Robin Roberson Murphy", "journal": "IEEE Access", "ref_id": "b51", "title": "Floodnet: A high resolution aerial imagery dataset for post flood scene understanding", "year": "2021" }, { "authors": "Christos Sakaridis; Dengxin Dai; Luc Van Gool", "journal": "", "ref_id": "b52", "title": "Guided curriculum model adaptation and uncertainty-aware evaluation for semantic nighttime image segmentation", "year": "2019" }, { "authors": "Constantin Seibold; Simon Reiß; Saquib Sarfraz; Matthias A Fink; Victoria Mayer; Jan Sellner; Sung Moon; Klaus H Kim; Jens Maier-Hein; Rainer Kleesiek; Stiefelhagen", "journal": "", "ref_id": "b53", "title": "Detailed annotations of chest x-rays via ct projection for report understanding", "year": "2022" }, { "authors": "Tiancheng Shen; Yuechen Zhang; Lu Qi; Jason Kuen; Xingyu Xie; Jianlong Wu; Zhe Lin; Jiaya Jia", "journal": "", "ref_id": "b54", "title": "High quality segmentation for ultra high-resolution images", "year": "2022" }, { "authors": "Neil Shreyas S Shivakumar; Alex Rodrigues; Ian D Zhou; Vijay Miller; Camillo J Kumar; Taylor", "journal": "", "ref_id": "b55", "title": "Pst900: Rgbthermal calibration, dataset and segmentation network", "year": "2020" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "ICLR", "ref_id": "b56", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2015" }, { "authors": "Konstantin Sofiiuk; Ilya A Petrov; Anton Konushin", "journal": "", "ref_id": "b57", "title": "Reviving iterative training with mask guidance for interactive segmentation", "year": "2022" }, { "authors": "Aditya Syed Waqas Zamir; Akshita Arora; Salman Gupta; Guolei Khan; Fahad Sun; Fan Shahbaz Khan; Ling Zhu; Gui-Song Shao; Xiang Xia; Bai", "journal": "", "ref_id": "b58", "title": "isaid: A large-scale dataset for instance segmentation in aerial images", "year": "2019" }, { "authors": "Peter Welinder; Steve Branson; Takeshi Mita; Catherine Wah; Florian Schroff; Serge Belongie; Pietro Perona", "journal": "", "ref_id": "b59", "title": "Caltech-ucsd birds 200", "year": "2010" }, { "authors": "Congcong Wen; Yuan Hu; Xiang Li; Zhenghang Yuan; Xiao Xiang Zhu", "journal": "", "ref_id": "b60", "title": 
"Vision-language models in remote sensing: Current progress and future trends", "year": "2023" }, { "authors": "Sanghyun Woo; Jongchan Park; Joon-Young Lee; In So Kweon", "journal": "", "ref_id": "b61", "title": "Cbam: Convolutional block attention module", "year": "2018" }, { "authors": "Zhuofan Xia; Xuran Pan; Shiji Song; Li Erran Li; Gao Huang", "journal": "", "ref_id": "b62", "title": "Vision transformer with deformable attention", "year": "2022" }, { "authors": "Jianwei Yang; Chunyuan Li; Xiyang Dai; Jianfeng Gao", "journal": "NeurIPS", "ref_id": "b63", "title": "Focal modulation networks", "year": "2022" }, { "authors": "Fisher Yu; Haofeng Chen; Xin Wang; Wenqi Xian; Yingying Chen; Fangchen Liu; Vashisht Madhavan; Trevor Darrell", "journal": "", "ref_id": "b64", "title": "Bdd100k: A diverse driving dataset for heterogeneous multitask learning", "year": "2020" }, { "authors": "Xiaoyu Yue; Shuyang Sun; Zhanghui Kuang; Meng Wei; Wayne Philip Hs Torr; Dahua Zhang; Lin", "journal": "", "ref_id": "b65", "title": "Vision transformer with progressive sampling", "year": "2021" }, { "authors": "Yi Zeng; Pingping Zhang; Jianming Zhang; Zhe Lin; Huchuan Lu", "journal": "", "ref_id": "b66", "title": "Towards high-resolution salient object detection", "year": "2019" }, { "authors": "Hao Zhang; Feng Li; Shilong Liu; Lei Zhang; Hang Su; Jun Zhu; Lionel Ni; Heung-Yeung Shum", "journal": "ICLR", "ref_id": "b67", "title": "DINO: DETR with improved denoising anchor boxes for end-to-end object detection", "year": "2023" }, { "authors": "Xizhou Zhu; Han Hu; Stephen Lin; Jifeng Dai", "journal": "", "ref_id": "b68", "title": "Deformable convnets v2: More deformable, better results", "year": "2019" }, { "authors": "Xizhou Zhu; Weijie Su; Lewei Lu; Bin Li; Xiaogang Wang; Jifeng Dai", "journal": "ICLR", "ref_id": "b69", "title": "Deformable detr: Deformable transformers for end-to-end object detection", "year": "2020" }, { "authors": "Xueyan Zou; Zi-Yi Dou; Jianwei Yang; Zhe Gan; Linjie Li; Chunyuan Li; Xiyang Dai; Harkirat Behl; Jianfeng Wang; Lu Yuan", "journal": "", "ref_id": "b70", "title": "Generalized decoding for pixel, image, and language", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 107.39, 549.35, 178.97, 30.32 ], "formula_id": "formula_0", "formula_text": "S = 1 B B i=1 IoU(M i , M union ),(1)" }, { "formula_coordinates": [ 4, 342.56, 675.57, 202.56, 11.72 ], "formula_id": "formula_1", "formula_text": "CAttn(t, x) = σ(Q(t) • K(x p ) T ) • V (x p ),(2)" }, { "formula_coordinates": [ 5, 124.35, 206.61, 162.02, 9.65 ], "formula_id": "formula_2", "formula_text": "∆p = θ s (θ offset (x p )),(3)" }, { "formula_coordinates": [ 5, 57.64, 430.94, 228.73, 12.69 ], "formula_id": "formula_3", "formula_text": "DCAttn(t, x) = σ(Q(t) • K(x ⋆ p+∆p ) T ) • V (x ⋆ p+∆p ).(4)" }, { "formula_coordinates": [ 5, 125.22, 704.2, 161.15, 9.65 ], "formula_id": "formula_4", "formula_text": "α = σ(MLP(t o )) • s,(5)" }, { "formula_coordinates": [ 5, 341.2, 195.56, 203.91, 12.69 ], "formula_id": "formula_5", "formula_text": "O(t, x) = CAttn(t, α 1 • x ⋆ p+∆p + α 2 • x p )(6)" } ]
10.18653/v1/W18-5446
2023-11-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22" ], "table_ref": [], "text": "Deep neural networks (DNNs) achieve outstanding performance on several challenging computer vision tasks such as image classification [1], object detection [2] and semantic segmentation [3], as well as natural language processing benchmarks such as text classification [4]. However, their accuracy comes at a high computational inference cost which limits their deployment, more so on edge devices when real-time treatment as well as energy consumption are a concern. This problem can be tackled via DNN quantization, i.e. by reducing the bit-width representation of the computations from floating point operations (FP) to e.g. int8 (8-bits integer representation), int4, int3 or even lower-bit representation such as ternary (where weights values are either -1, 0 or +1) quantization. Because DNN inference principally relies on matrix multiplication, such quantization dramatically diminishes the number of bit-wise operations (as defined in [5]), thus limiting the DNN latency and energy consumption. However, DNN quantization usually comes at the expense of the network accuracy. As a consequence, DNN quantization is an active field of research [6,7,8,9,10,11,12,13] that aims at limiting this accuracy drop while reducing the number of bit-wise operations.\nAll the aforementioned methods are data-driven, as they either involve training a network from scratch or fine-tune an already trained and quantized one. However, while such approaches usually allow lower quantization errors using low bit-wise representations, due to the growing concerns on privacy rights and data privacy, there is an ever-increasing number of real-case scenarios (e.g. health and military services) where data may not be available for quantization purposes. Motivated by these observations, recently, several data-free quantization algorithms were published [14,15,16,17,18,19], focusing on the quantization operator, i.e. the transformation which maps the floating point weights to their low-bit, fixed point, values. However, these approaches still struggle to offer an interesting alternative to data-driven techniques in terms of accuracy preservation. Furthermore, when considering a specific target device for deployment, traditional quantization methods, usually focusing on the quantization operator, offer limited options: given a supported bit width (given by the device, as most hardware usually support only a few representation formats [20]) they either achieve satisfactory accuracy or not.\nIn our previous work, REx [21], we introduced a novel residual expansion of quantized deep neural networks. Drawing inspiration from wavelets-based methods for image compression [22,23], this approach considers successive residual quantization errors between the quantized and original model. As such, REx can provide several accuracy vs. speed trade-off points for each bit width. Increasing the number of residuals in the expansion (i.e. the expansion order) increases the fidelity to the original, non-quantized model at the expanse of additional computations. 
In addition, we proposed a group-sparse expansion, which allows us to maintain the accuracy using significantly less bit operations.\nIn this work, we propose an extension of this work, dubbed PIPE, that leverages parallelization capabilities of modern hardware. Formally, from a residual expansion of a quantized model, we can group together several terms in the expansion and, under certain assumptions, completely separate the computation between several subnetworks of a resulting ensemble. This ensemble approximation, depending on the parallelization capacities of the target hardware, may result in dramatic speed enhancement. Therefore, given a target device, our approach allows finding the best accuracy vs. speed trade-offs. Our contributions are thus three-fold:\n• PIPE , a data-free quantization method that is both efficient and flexible. PIPE leverages residual quantization, with an ensemble approximation, to enable finding suitable trade-offs depending on a target bit-width and parallelization capacity.\n• Theoretical guarantees on both the exponential convergence of the quantized model towards the full-precision model, and ensemble approximation errors. This is of paramount importance in a data-free context, where we cannot easily measure the accuracy degradation.\n• Extensive empirical validation we show through a thorough validation that PIPE significantly outperforms every state-of-the-art data-free quantization technique as a standalone method but also helps improve said methods when used in combination. In particular, PIPE achieves outstanding performances on both standard and low bit range quantization on various Con-vNet architectures applied to ImageNet classification, Pascal VOC object detection and CityScapes semantic segmentation.\nIn addition, PIPE is agnostic to the quantization operator and can be combined with most recent state-of-the-art methods that focus on the latter.\n2 Related Work" }, { "figure_ref": [], "heading": "Quantization", "publication_ref": [ "b6", "b7", "b8", "b9", "b10", "b23", "b24", "b25" ], "table_ref": [], "text": "In this section, we review existing methods for DNN quantization, with an emphasis on approaches geared towards run-time acceleration. The vast majority of DNN quantization techniques rely on data usage (Quantization-Aware Training) and [7,8,9,10,11,24,25] usually rely on variants of the straight through estimation to alleviate the rounding operation gradients. Among these methods, [26] bears the most resemblance with the proposed PIPE method. It minimizes the residual error during training, using weight decay over the residue. The similarity with PIPE comes from the use of a second order expansion of the quantization errors. However, it discards the quantization error after training, while we propose to keep the extra operations in order to ensure a high fidelity to the provided pre-trained model." }, { "figure_ref": [], "heading": "Data-Free Quantization", "publication_ref": [ "b13", "b26", "b4", "b27", "b4", "b28", "b14" ], "table_ref": [], "text": "Nagel et al. [14] discuss the necessity to have data available to successfully design a quantization pipeline. The proposed method consists in balancing the weight ranges over the different layers of a model, using scale invariance properties (similarly to [27]) that are specific to piece-wise affine (e.g. ReLU) activation functions, and relying on a traditional, naive quantization operator [5]. 
The authors note that the magnitude of the quantization error strongly varies with the DNN architecture: as such, already compact architectures such as MobileNets [28] appear as challenging for data-free quantization purposes (for instance, authors in [5] report a dramatic drop to chancelevel accuracy without fine-tuning). Lin et al. [29] studied the properties of the noise induced by the quantization operator. These properties were later used in SQNR [15], a method that consists in assigning, for each layer, an optimal bit-width representation. Overall, data-free approaches generally struggle to deal with low-bit representation problem, i.e. performing quantization into bit widths lower than int4 (e.g. int3 or ternary quantization). However, the proposed method successfully addresses this challenge with the addition of the residual errors in higher order expansions. Furthermore, to compensate for the increased latency, we propose a budgeted expansion as well as a rewriting of the quantized expansion as ensembles for further parallelization." }, { "figure_ref": [], "heading": "Flexibility in Quantization", "publication_ref": [ "b29", "b30", "b19", "b31" ], "table_ref": [], "text": "In practice, the existing data-free quantization methods only offer a single possible quantized model given a supported bit-width. Nevertheless, most pieces of hardware do not support a wide range of bit-width. For instance, Turing [30] and Untether AI [31] architectures support int4 and int8 quantization while the Nvidia A100 [20] supports int8, int4 and binary (int1) quantization. Conversely, PIPE circumvents this limitation by offering several trade-offs given a bit-width representation. As discussed by [32], hardware cost is for the most part derived from energy consumption. Consequently, quantizing models to supported bit width is a problem of paramount importance." }, { "figure_ref": [], "heading": "Ensemble Methods", "publication_ref": [ "b32", "b33", "b34", "b35", "b36", "b37", "b38", "b39", "b40", "b40", "b40" ], "table_ref": [], "text": "Ensemble methods [33] are ubiquitous and widely studied models in the machine learning community, where weak, yet complimentary predictors are aggregated via e.g. bagging [34], boosting [35] or gradient boosting [36], to achieve superior performance. Leveraging deep learning and ensemble methods crossovers is still an overlooked subject, partly because standalone DNNs are usually quite robust on their own, and because DNNs already involve a high computational burden. Nevertheless, some methods [37,38,39,40] leveraged deep ensembles to great success for various applications. Of particular interest is the work of Zhu et al. [41] which consists in learning ensembles of binary neural networks (BNNs) to reach interesting accuracy vs. inference speed trade-offs, thanks to the potential weakness of predictors working with very lowbit representations such as BNNs, the potential complementarity between these, as well as the fact that, intrinsically, ensembles can be easily parallelized in practice. Our method offers several advantages over [41]: First, PIPE is applied to existing DNNs in a data-free manner, thus can be applied to accelerate already performing networks without bells and whistles. More importantly, accuracy of the BNN ensembles is significantly lower than that of the original full-precision model, while PIPE achieves high acceleration without significant accuracy loss. 
In addition, results reported in [41] are admittedly unstable as the ensembles grow, which the authors attribute to overfitting. PIPE , however, is robust to such problems, as we demonstrate a convergence to the original accuracy with respect to the order of expansion." }, { "figure_ref": [], "heading": "Methodology overview", "publication_ref": [ "b4", "b41" ], "table_ref": [], "text": "Let's consider F, a trained network with L layers and trained weights W l . Given a target integer representation in b bits, e.g. int8 or int4, we consider a quantization operator Q. Formally, Q maps [min{W l }; max{W l }] ⊂ R to the quantized interval [-2 b-1 ; 2 b-1 -1] ∩ Z. The most straightforward way to do so is to apply a scaling s l and round ⌊•⌉ the scaled tensor, i.e.:\nQ(W l ) = W l s W l(1)\nWith s l the quantization scale for W l computed as in [5] without loss of generality. Following the standard formulation [42], a quantization operator Q, comes with a de-quantization operator Q -1 . For the simple quantization operator Q in Equation ( 1), a natural choice is\nQ -1 (Q(W l )) = s l × Q(W l ).\nNote that, despite the notation, Q -1 is not a true inverse, as by definition of the quantized space, there is some information loss. This loss, called the quantization error, is defined as:\nW l -Q -1 (Q(W l )).\nIn data-free quantization, we want to minimize this error in order to achieve the highest possible fidelity to the original model. In the following section, we describe how we can efficiently reduce the quantization error for a fixed target bit-width b." }, { "figure_ref": [ "fig_0" ], "heading": "Residual Expansion", "publication_ref": [ "b20", "b20", "b20" ], "table_ref": [], "text": "We propose to quantize the residual errors introduced by the quantization process. Although the proposed method can be applied to any tensor, let's consider a weight tensor W. In the fullprecision space (R), its first approximation is R 1 = Q -1 (Q(W)). To reduce the quantization error,\nF (4) ( x) (a) residual expansion x R (k=1) R (k=2) R (k=3) R (k=4) R (k=1) R (k=2) R (k=3) R (k=4) F (4) ( x) x R γ =0.5 (k=1) R γ =0.5 (k=2) R γ =0.5 (k=3) R γ =0.5 (k=4) R (k=1) R γ =0.5 (k=2) R γ =0.5 (k=3) R γ =0.5 (k=4) F(4) ( x) x R γ =0.5 (k=1) R γ =0.5 (k=2) R γ =0.5 (k=3) R γ =0.5 (k=4) R (k=1) R γ =0.5 (k=2) R γ =0.5 (k=3) R γ =0.5(k=4)\n(b) group-sparse expansion (c) ensemble expansion we define R 2 as the quantized residual error\nR 2 = Q -1 Q W -R 1(2)\nConsequently, during the quantized inference, we compute R 1 X + R 2 X ≈ WX which provides a finer approximation than the simple evaluation R 1 X. For the sake of generality, we will not necessarily assume that all weights were expanded, i.e. some weights may have been pruned like in [21]. The process can be generalized to any expansion order K.\nR K = Q -1        Q        W - K-1 k=1 R k              (3)\nThe resulting expanded layer is illustrated in Figure 1 (a) in the case K = 4. Intuitively, an expansion (R 1 , ..., R K ) provides the approximation K k=1 R k of W and this approximation converges exponentially fast to the original full-precision weights with respect to K. As the support of the quantization error space is smaller than one quantization step, the error decreases by a factor larger than 2 b with each expansion term. Furthermore, as the quantization error decreases, it is expected that the prediction of the quantized model shall achieve a closer match to the original predictions. 
This is especially important in the context of data-free quantization, as we do not have the option to perform fine-tuning to recover accuracy. Worst, we also can not evaluate the degradation of the model on a calibration/validation set. Nonetheless, in [21] we provided an upper bound on the maximum error ϵ max introduced by residual quantization on the predictions, as\nϵ max ≤ U = L l=1        l i=1 1 2 b-1 -1 K-1 s R i 2 + 1        -1 (4)\nwhere s i is the scaling factor from equation 1 applied to each residue. This implies that, in practice and regardless of the quantization operator, a network can be quantized with high fidelity with only a few expansion orders to fit a given bit-width. \nterms R K 1 +•••+K m-1 +1 l . . . R K 1 +•••+K m l\n, and (b) the bias terms for all layers appear only in Network 1. At inference time, the input is fed to all the M networks, resulting in efficient parallelization at virtually no cost in term of accuracy, depending on the expansion term clustering.\nIn its most general expression, the residual expansion can be defined with a pruning mask M k for each residual. The pruning mask can be applied in either a structured or unstructured fashion, depending on the desired outcome. For instance, we showed that it is very effective to handle LLMs outliers [21].\nR K = M K Q -1        Q        W - K-1 k=1 R k              (5)\nIn the case of unstructured and structured pruning, M K is a sparse tensor or tensor was row-wise zeros, respectively. In the remainder of this study, we will assume a structured mask (simpler to leverage) unless stated otherwise. In practice, the pruning ratio is given by a sparsity parameter γ in order to match a total budget of operations with K. This sparse expansion is guaranteed to converge faster than the standard one to the original weight values.\nIn the resulting expanded quantized network, the inference of the residual weights R k can be performed in parallel for each layer. In PIPE , we enable to perform the inference of residual weights across different layers in parallel through ensembling." }, { "figure_ref": [ "fig_0", "fig_0", "fig_1", "fig_0" ], "heading": "Ensembling from Expansion", "publication_ref": [], "table_ref": [], "text": "So far (see Figure 1 (a-b)) each layer computes the R k × X for all orders k. As previously stated, this formulation allows finding better trade-offs depending on hardware capacities. However, it does not fully exploit the potential parallelization capacities of the hardware, as the results from all these expansion orders have to be summed before applying the non-linearity. Intuitively, a better way to leverage parallelization would be to only sum the results after the last layer (Figure 1 (c)), akin to an ensemble model where the elements in the ensemble corresponds to (a clustering of) the different expansion orders k.\nTo do so, we exploit two assumptions. First, the activation functions σ of the model satisfy the following:\nσ(• + ϵ) ≈ σ(•) + σ(ϵ)(6)\nWhen |ϵ| → 0. This holds true for most popular activation functions such as ReLU, SiLU, GeLU or sigmoid. Second, the exponential convergence of the expansion terms (Equation 4) ensures that the first expansion orders are preponderant w.r.t. the subsequent ones. With this in mind, we can group the expansion orders in M clusters that each contain K m successive expansion orders, with\nm ∈ [|1, M|] and [K 1 , . . . 
, K M ] (K 1 + • • • + K M = K\n) the total number of orders in the expansion.\nFor each of these clusters, the sum of the expansion orders that are contained inside must have negligible dynamics with regard to the previous cluster (see Appendix A on how to empirically group expansion orders) to successively apply the approximation in Equation 6. Finally, we can define the M quantized networks of the ensemble as having the same architecture as the original model F, except that the biases of the model are all assigned to F 1 . For each m = 1, . . . , M, the weights of the m th predictor corresponds to the sum over the residuals at orders belonging to the m th cluster. This ensemble approximation is illustrated on Figure 2. Proof of this approximation can be found in Appendix B. This ensemble approximation (Figure 1 (c)) also comes with strong theoretical guarantees (see Appendix C) on accuracy preservation depending on expansion order grouping. Furthermore, it allows better usage of the potential parallelization capacities of a target hardware (per-model instead of per-layer parallelization), as will be shown in the upcoming experiments." }, { "figure_ref": [], "heading": "Quantization Experiments", "publication_ref": [], "table_ref": [], "text": "In the following sections, we first go through the implementation requirements and efficient strategies to fully leverage the proposed expansions. Second, we perform a comparison of each expansion method in order to show the flexibility of PIPE with respect to the bit-width. Third, we compare PIPE to other quantization schemes under the constraint of equal bit operations as well as under the assumption of heavy parallelization for ensembling. Finally, we validate for each expansion their respective upper bound on the maximum error with respect to the original predictions." }, { "figure_ref": [], "heading": "Implementation Details and Benchmarks", "publication_ref": [ "b42", "b43", "b44", "b45", "b41", "b46", "b47", "b48" ], "table_ref": [], "text": "We ran our tests on 6 different backbones, including ConvNets and transformers, and 4 tasks from both computer vision and natural language processing. We used ImageNet [43], Pascal VOC 2012 [44], CityScapes dataset [45] and GLUE [46].\nUnless stated otherwise, we apply symmetric, static, per-channel quantization as defined in [42] and perform batch-normalization folding prior to any processing using the optimal method by [47]. In order to leverage the existing efficient implementations of the convolutional layers and fully-connected layers in CUDA, we propose to implement the expanded layer using a single kernel rather than K kernels. This is achieved by concatenating the kernels along the output dimension. Consequently, the challenge of efficiently splitting the computations to fully leverage the target device computational power is left to the inference engine. In practice, this results in both better performance and less work, in order to adapt the method to existing engines such as OpenVino [48] and TensorRT [49]. Furthermore, the sparse expansion does not use sparse matrix multiplications as sparsity is applied to the neurons (or channels for ConvNets). The libraries, pre-trained model checkpoints and datasets information, are detailed in Appendix D. We evaluate the pre-processing time required by PIPE and compare it to other generic data-free quantization methods in Appendix E. In the following section, we confirm the hinted benefits from each of the proposed expansions." 
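For concreteness, the following NumPy sketch builds the residual terms of Equation (3) for a convolution kernel and concatenates them along the output dimension as described above. It is an illustrative sketch rather than the exact PIPE implementation: the helper names and the per-channel scale computation are assumptions of ours.

```python
import numpy as np

def quantize(w, bits):
    # symmetric per-channel quantization: one scale per output channel (axis 0)
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).reshape(w.shape[0], -1).max(axis=1) / qmax
    scale = np.maximum(scale, 1e-12).reshape(-1, *([1] * (w.ndim - 1)))
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q, scale                      # Q(W) and its de-quantization scales

def residual_expansion(w, bits, K):
    # R_k = Q^{-1}(Q(W - sum_{i<k} R_i)), cf. Equation (3)
    residues, approx = [], np.zeros_like(w)
    for _ in range(K):
        q, s = quantize(w - approx, bits)
        residues.append((q, s))
        approx = approx + q * s          # Q^{-1}(Q(.)) = scale * Q(.)
    return residues

def concat_kernels(residues):
    # single-kernel implementation: stack the K quantized residues along the
    # output dimension so that one int-b convolution computes every R_k X
    return np.concatenate([q for q, _ in residues], axis=0)

# toy check: the expansion converges towards the full-precision kernel
w = np.random.randn(16, 8, 3, 3).astype(np.float32)   # C_out, C_in, k_h, k_w
for K in (1, 2, 3):
    approx = sum(q * s for q, s in residual_expansion(w, bits=4, K=K))
    print(K, float(np.abs(w - approx).max()))
```

On random weights, the printed maximum deviation from the full-precision kernel shrinks by roughly a factor 2^b with each additional order, in line with the exponential convergence discussed above.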
}, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Flexible Quantization", "publication_ref": [ "b4" ], "table_ref": [], "text": "Figure 3 shows different trade-offs enabled by PIPE on different bit-widths for an EfficientNet-B0 on ImageNet. First, the baseline quantization with the baseline quantization operator from [5] (as depicted by the pentagon of different colors-one for each bit width) offers no trade-off possibility given a specific bit-width and usually performs poorly below int8 quantization (e.g. barely reaching 20.290% top1 accuracy in W6/A6 quantization). PIPE , however, in the same setup, enables finding several trade-offs for each specific bit-width (e.g. int4 and ternary on Figure 3) and supporting hardware. Furthermore, the sparse expansion enables finding more potential trade-offs (by varying the budget and expansion order) for every bit-width. Those trade-offs are generally more interesting than comparable ones obtained using the base expansion.\nWe do not require the use of extremely sparse residues in order to get the best accuracy. For instance, in Figure 3 we reach full-precision accuracy using 25% sparse residues. In other words, the process converges fast with respect to the sparsity rates. All in all, these results demonstrate the flexibility of PIPE to find good accuracy v.s. speed trade-offs, given a budget of total bit operations (BOPs) to fit. In the following section, we evaluate the ability of PIPE to outperform existing quantization methods in terms of equal bops as well as in the context of heavy parallelization." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Equal BOPs", "publication_ref": [ "b4", "b3", "b49" ], "table_ref": [ "tab_0", "tab_1" ], "text": "In order to highlight the benefits of residual quantization errors expansions as a stand-alone improvement upon existing methods with equal BOPs, we compare PIPE using the naive quantization operator from [5] on a variety of reference benchmarks. First, in Table 1, we report the performance on three different computer vision networks between state-of-the-art methods in W6/A6 quantization (other setups are discussed in Appendix F) and PIPE using a sparse expansion at order K = 2 using 50% of a 4 bits representations in order to get a similar total number of bit operations (150% of 4 bits ≈ 6 bits). For all networks, PIPE significantly outperform recent stateof-the-art data-free quantization methods at equal BOPs. Furthermore, we confirm these results 2, we perform a similar experiment on NLP using Bert [4]. Similarly to our results on ConvNets, PIPE can find better accuracy per bit trade-offs as compared to the fours references, including non-uniform quantization [50]. Furthermore, if we consider parallelization, PIPE can offer even higher accuracy results using ensemble approximation, as shown in what follows." }, { "figure_ref": [ "fig_3" ], "heading": "Leveraging Parallelization", "publication_ref": [ "b18", "b50", "b12" ], "table_ref": [ "tab_2" ], "text": "On large devices using CPUs or GPUs for inference, parallelization of the computations within a layer or a model can drastically reduce the runtime. In Figure 4, we showcase the normalized inference times (i.e. the ratio between the runtime of the expanded networks and the baseline quantized model) for several ResNets and MobileNets on ImageNet. 
On the one hand, as indicated by the dashed plots (no ensembling), the relative inference time grows sub-linearly for each network, e.g. order 2 comes at virtually no cost in terms of inference time, while using higher orders may induce a noticeable slow-down: < 50% speed reduction for order K = 3, and about 100% at order K = 5. On the other hand, when we evaluate ensembles (plain lines), and specifically two predictors with similar sizes (see Appendix A), we observe that we can expand the quantized models up to order 4 without noticeably degrading the inference speed even a small CPU such as the intel m3. This is due to the more efficient parallelization using the proposed ensemble approximation.\nConsequently, in Table 3 we compare PIPE to other data-free quantization methods under this assumption of heavy parallelization (and asymmetric quantization). We consider an ensemble of two weak predictors with each 2 expansion orders: R 1 , R 2 for the first predictor and R 3 , R 4 for the second. Our results on the challenging MobileNet show that, as compared to the most recent data-free quantization operators that do not use data-generation (no-DG) such as SQuant [19] and SPIQ [51], the proposed PIPE method improves the accuracy by 16.35 points in 4 bits quantization. Furthermore, data-free techniques that leverage synthetic data are usually known for their higher performance as compared to the methods that only focus on the quantization operator. Nonetheless, PIPE still manages to improve upon recent techniques including IntraQ [13] and AIT [12] by 5.26 points in the advantageous and realistic assumption of heavy parallelized inference. These observations can be generalized to more DNN architectures, as discussed in Appendix G. We therefore conclude that PIPE allows to find better accuracy v.s. speed trade-offs given the possibility to leverage parallelization on larger hardware.\nEnsemble Ensemble" }, { "figure_ref": [], "heading": "Empirical Validation of the Theoretical Bounds", "publication_ref": [ "b51" ], "table_ref": [ "tab_3", "tab_3" ], "text": "In Table 4, we validate the proposed upper bound U on the maximum error on the predictions (see Equation 4) on a VGG-16 [52] trained on ImageNet. The tightness of the provided theoretical results can be estimated from the gap between our estimation and the empirical maximum error U empirical from quantization on the predictions, which is measured as the infinite norm between the full-precision and quantized logits. We observe that a naïve 8-bits quantization (i.e. no expansion) leads to an upper bound U = 0.12, while we observe U empirical = 0.05. Compare with the norms of the logits, which in this case is equal to 0.3423: as such, the proposed upper bound appears as relatively tight and significantly lower than the logits magnitude. In such a case, due to overconfidence, the error shall not, in theory, affect the classification. The proposed upper bound is even tighter for larger values of K, and becomes lower and lower (for both the theoretical and corresponding empirical maximum errors) when introducing sparsity. Last but not least, we see on the last two rows in Table 4 that U stays tight when using the ensemble approximation. This further demonstrates the good properties of the proposed expansion, sparse expansion and ensemble approximation in PIPE in addition to the relevance of its theoretical guarantees, which are critical in data-free quantization. 
" }, { "figure_ref": [], "heading": "Flexibility with respect to the Quantization Operator", "publication_ref": [ "b18", "b50", "b52", "b53" ], "table_ref": [ "tab_4" ], "text": "Most recent approaches for data-free quantization focus on designing better quantization operators. Interestingly, our approach is agnostic to the choice of the quantization operator and can thus be combined with these approaches without bells and whistles. In Table 5, we report the possible trade-offs achievable with PIPE combined with recent approaches focusing on the quantization operator, on MobileNet V2. The different trade-offs are sorted in ascending order in terms of added overhead operations, e.g. W4 + 25% leads to fewer operations than W4 + 50% . First, when used with SQuant [19], PIPE achieves full-precision accuracy in W4/A4 with only 75% overhead, even outperforming W8/A8 quantization. SPIQ [51], can also be adapted with PIPE in order to achieve good accuracy using only 4 bits representation as it benefits from finer weight quantization. This explains the slightly higher accuracies than SQuant using 25% and 50% sparsity. Finally, with AdaRound [53] and BrecQ [54], two PTQ techniques, we observe similar results as expected. In particular, BrecQ which already achieves decent accuracy in W4/A4 with a 5.23 points accuracy drop gets closer to the original accuracy (0.86 point accuracy drop) using a quarter of the expansion. In such a case, if the target hardware only has support for W4 and W6, PIPE shall allow using W4 quantization with a small overhead due to the addition of the sparse expansion term (which can be parallelized using the proposed ensemble approximation if the hardware supports it), whereas most methods would stick with W6 quantization to preserve the accuracy of the model. Thus, we believe that those results illustrate the adaptability of the proposed framework." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b20" ], "table_ref": [], "text": "In this work, we proposed a novel ensemble approximation for the residual expansion of the quantization error to design a novel flexible data-free quantization method. In order to find the best accuracy v.s. speed trade-offs, we proposed to consider the residuals of the quantization error, along with a group-sparse formulation. Furthermore, we showed that, under certain assumptions, the terms in the residual expansion can be grouped together at the whole network level, resulting in an ensemble of (grouped together) residual expansion networks. This ensemble approximation allows to efficiently leverage the parallelization capacities of the hardware, if any.\nThe proposed method, dubbed PIPE , is flexible with respect to both hardware bit-width support and parallelization capacities. In addition, PIPE is backed up with strong theoretical guarantees which, critical in the context of data-free quantization (where one can not empirically estimate the accuracy degradation), as we provide a tight upper bound on the error caused by the residual quantization and ensemble approximations.\nThrough extensive experimental validation, we showed the benefits of the proposed approach. As such, PIPE significantly outperforms recent data-free quantization methods on a wide range of ConvNet architectures applied to ImageNet classification, Pascal VOC object detection, CityScapes semantic segmentation as well as transformers on GLUE text classification. 
Furthermore, as showed in previous work [21], the proposed framework also constitutes an elegant way of dealing with outlying values in LLMs: as such, it appears as an ideal choice for designing flexible quantization approaches to further reduce the memory footprint and latency of DNNs. This is of paramount importance since nowadays models become more and more parameter and computation hungry. Last but not least, the ideas presented in this paper are orthogonal to most recent approaches focusing on improving the quantization operator, and hence can straightforwardly be combined with those approaches." }, { "figure_ref": [], "heading": "Limitations:", "publication_ref": [], "table_ref": [], "text": "The residual expansion method introduced in this paper does not adapt to the inter-layer importance and runtime cost discrepancies. An interesting future work would thus consist in applying more expansion orders on the most important layers w.r.t. the model accuracy, as well as using fewer orders for the most computationally expensive layers. R 100 %\n(2)" }, { "figure_ref": [], "heading": "Ft32", "publication_ref": [ "b3", "b3" ], "table_ref": [], "text": "MobileNet ResNet \nk 1 =12 k 1 =6 k 1 =1 k 1 =12 k 1 =6 k 1 =1 k 1 =1TNN\nR 25 % (13) R 25 %3,2,2,2,2,2] [2,2,2,2,2,2,1]\nR 25 %\n(13) R 25 %(13)\nR 25 %\n(13) R 25 %(13)\nwithout ensembling with ensembling have to be fixed carefully so that the ensemble shall be faster than the developed network, without accuracy loss. Fortunately, the accuracy behavior w.r.t. the value of K 1 can be estimated from the values of the upper bound U (Lemma Appendix C.3) on the expected error from ensembling\nE[∥ f (K) -f (K) ∥].\nAs illustrated on Figure A.5 in the case of ternary quantization, this upper bound is tight and collapses more than exponentially fast w.r.t. K 1 . For instance, if K 1 ≤ 2, U is significantly larger than the amplitude of the logits E[∥ f (K) ∥] and the accuracy is at risk of collapsing. When U vanishes compared to E[∥ f (K) ∥], the ensemble and regular expansions are guaranteed to be almost identical, and the accuracy is preserved. Thus, we can compare the upper bound U and the empirical norm of the logits from the expansion E[∥ f (K) ∥] to assess the validity of an ensemble.\nPlus, E[∥ f (K) ∥] can be estimated using statistics from the last batch norm layers to allow for a fully data-free criterion. With this in mind, in Figure A.6 we compare the top-1 accuracies of f (K) and f (K) for different architectures (MobileNet v2 and ResNet 50) and quantization configurations. The ensemble expansion systematically matches the accuracy of the developed network in terms of accuracy, except in the case of ternary quantization, when K 1 = 1. This is remarkable, as ensembling significantly decreases the inference time with a two predictors configuration.\nFigure A.7 shows the results obtained with larger ensembles of smaller quantized predictors, i.e. with M > 2. We observe the full preservation of the accuracy of the developed network as long as K 1 ≥ 4 and a loss of 6 points for balanced ensembles of 5 -6 predictors and K 1 = 3. Here again, with M = 7 and K 1 = 2, the accuracy is very low, as predicted by A.5. To sum it up, ensembling developed networks allows to significantly decrease the inference time, with theoretical guarantees on the accuracy preservation.\nFinally, Table A.6 shows the runtime of a ResNet 50 for a full evaluation on ImageNet validation set (50,000 images). 
We tested the models on different devices (CPU/GPU) using a fixed budget β = 7 and order K = 8, and compared ensembles expansions (with 2 [4,4], 3 [3, 3, 2] and 4 [2, 2, 2, 2] predictors). On each device, the ensembles are up to 10 times faster than the baseline expansion.\nTable A.6: Comparison of the evaluation time in seconds of a ResNet 50 over the validation set of ImageNet using an expansion of order k = 8 with ensembling of m predictors m ∈ {1, 2, 3, 4}. We distinguish the setups by reporting the values of [K 1 , ..., \nK m ]. device [8] [4, 4] [3, 3, 2] [2, 2, 2, 2] [1] expansion ✓ ✓ ✓ ✓ ✗ ensembling ✗ ✓ ✓ ✓ ✗ Intel(R) i9-9900K" }, { "figure_ref": [], "heading": "Appendix B Ensembling Protocol", "publication_ref": [ "b20" ], "table_ref": [], "text": "We recall some results from previous work [21]:\nLemma Appendix B.1.\nLet f be a layer with weights W ∈ R n with a symmetric distribution. We denote R (k) the k th quantized weight from the corresponding residual error. Then the error between the rescaled W (K) = Q -1 (R (K) ) and original weights W decreases exponentially, i.e.:\nw\n- K k=1 w (k) ≤ 1 2 b-1 -1 K-1 (λ R (K) ) i 2 (B.1)\nwhere w and w (k) denote the elements of W and W (k) and (λ R (k) ) i denotes the row-wise rescaling factor at order k corresponding to w, as defined in equation 1.\nLemma Appendix B.2. Let f be a layer of real-valued weights W with a symmetric distribution.\nThen we have w -\n       K-1 k=1 w (k) + Q -1 R (K) γ        ≤ N (K) • 1 (K) γ ∞ (λ R (k) ) i 2 b-1 -1 K 2 (B.2)\nwhere ∥∥ ∞ is the infinite norm operator with the convention that ∥0∥ ∞ = 1 and (λ R (k) ) i denotes the row-wise rescaling factor at order K corresponding to w.\nLemma Appendix B.1 and Appendix B.2 state that the first terms in the expansion, i.e. the lower values of k, are preponderant within the magnitude before the activation. Moreover, the activation functions traditionally used in DNNs (e.g. ReLU) satisfy σ(x + ϵ) ≈ σ(x) + σ(ϵ) when |ϵ| → 0. With respect to the proposed expansion, x corresponds to the first orders and ϵ to the terms of higher orders. In the case of two-layers networks, these properties allow us to break down the network in an (approximately) equivalent ensemble of two networks, the first one containing the first, largest orders, and the second one containing the remaining ones.\n1 respectively, we define the quantization expansion of residual errors (R (k) 1 ) k∈{1,...,K} of order K as in\nf (K) : x → σ        K k=1 R (k) Q(x)λ R (k) λ x + b        . (B.3)\nLemma Appendix B.1 states that the first terms in the sum, i.e. the lower values of k, are preponderant in the pre-activation term. Thus, there exists\nK 1 < K such that f (K) 1 ≈ f (K) 1 = f (K) 1,1 + f (K) 1,2 with:            f (K) 1,1 : x → σ K 1 k=1 R (k) 1 x q λ R (k) 1 λ x + b 1 f (K) 1,2 : x → σ K k=K 1 +1 R (k) 1 x q λ R (k) 1 λ x (B.4) Furthermore F (K) : x → f (K) 2 ( f (K) 1 (x)). Let R (k)\n2 and b 2 respectively denote the kernel and bias weights of the second layer f (K) 2 . By linearity of the last layer, we have\nF (K) ≈ F(K) = K k=1 R (k) 2 f (K) 1,1 λ R (k) 2 λ f (K) 1,1 + b 2 + K k=1 R (k) 2 f (K) 1,2 λ R (k) 2 λ f (K) 1,2 (B.5)\nStemming from this formulation, we can express the quantized network f (K) as an ensemble of quantized neural networks which share a similar architecture, i.e. F (K) ≈ F(K) = g(K) + h(K) . This defines the ensemble expansion from residual errors of order K." 
}, { "figure_ref": [], "heading": "Appendix B.2 Ensemble of more Layers DNNs", "publication_ref": [ "b18", "b16", "b17", "b55", "b56" ], "table_ref": [], "text": "Similarly, we demonstrate by structural induction that a network with arbitrary number of layers L can be approximated by an ensemble expansion F(K) composed of M quantized networks, defined by the parameters K 1 , ..., K M setting the size of each predictor, such that K 1 + • • • + K M = K (the total number of orders in the expansion).\nWe recall that f\n(K) L-1 (x) = σ L-1 K k=1 R (k) L-1 λ R (k) L-1 X f (K) L-1 + b L-1 .\nIf we directly apply equation B.4\nthen we get for a given\nK L-1 < K X f (K) L-1 →σ L-1        K L-1 k=1 R (k) L-1 X f (K) L-1 λ X f (K) L-1 λ R (k) L-1 + b L-1        + K k=K L-1 +1 R (k) L-1 X f (K) L-1 λ X f (K) L-1 λ R (k) L-1 (B.6)\nHowever the two terms X f (K) L-1 (x) inside and outside the activation function are not independent. Furthermore, the terms that compose X f (K) L-1 (x) , from equation C.13, do not have the same range values, i.e. g(K) L-2 (X g(K)\nL-2\n)λ g(K) L-2 >> h(K) L-2 (X h(K) L-2 )λ h(K) L-2\n. We define the operation * as follows\nf (K) L-1 (x) = σ L-1 K L-1 k=1 R (k) L-1 g(K) L-2 (X g(K) L-2 ) × λ R (k) L-1 λ g(K) L-2 + b L-1 + K k=K L-1 +1 R (k) L-1 g(K) L-2 (X g(K) L-2 )λ R (k) L-1 λ g(K) L-2 + σ L-1 K L-1 k=1 R (k) L-1 h(K) L-2 (X h(K) L-2 ) × λ R (k) L-1 λ h(K) L-2 + K k=K L-1 +1 R (k) L-1 h(K) L-2 (X h(K) L-2 )λ R (k) L-1 λ h(K) L-2 (B.7)\nNow, we have two independent functions g(K)\nL-1 and h(K)\nL-1 such that f (K) L-1 = g(K) L-1 + h(K)\nL-1 , these functions have independent inputs and\n                                   g(K) L-1 (X g(K) L-2 ) = σ L-1 K L-1 k=1 R (k) L-1 g(K) L-2 (X g(K) L-2 ) × λ R (k) L-1 λ g(K) L-2 + b L-1 + K k=K L-1 +1 R (k) L-1 g(K) L-2 (X g(K) L-2 )λ R (k) L-1 λ g(K) L-2 h(K) L-1 (X h(K) L-2 ) = σ L-1 K L-1 k=1 R (k) L-1 h(K) L-2 (X h(K) L-2 ) × λ R (k) L-1 λ h(K) L-2 + K k=K L-1 +1 R (k) L-1 h(K) L-2 (X h(K) L-2 )λ R (k) L-1 λ h(K) L-2 (B.8)\nThis defines an iterative procedure in order to define our ensembling of expansions of residual errors for a feed-forward neural network f with any number L of layers.\nTo sum it up, the resulting predictors share an identical architecture up to their respective expansion order, defined by K 1 . The crucial difference comes from their weight values, which correspond to different orders of expansion of the full-precision weights. This is also the case if we want ensembles of three or more predictors. In such instances, instead of only K 1 , we would have K 1 , ..., K m-1 for m predictors.\nConsequently, every predictor shares the same architecture (without biases) up to their respective expansion order. For each m = 1, . . . , M, the m th predictor corresponds to the residuals at orders k ∈ { m-1 l=1 K l + i|i ∈ {1, . . . , K m }}. This ensemble approximation allows to more efficiently parallelize the computations across the expansion orders for improved runtimes. We provide insights on how to set the size of each weak predictor in Appendix A Appendix C Upper Bound Error Theorem Appendix C.1. Let F be a trained L layers sequential DNN with ReLU activation σ L-1 = • • • = σ 1 . We note s l the largest singular value of W l , i.e. the spectral norm of W l . 
Then we have max\n∥X∥=1 ∥F(X) -F(X) (K) ∥ ∞ ≤ U res U res = L l=1        l i=1 s i u (K) i + 1        -1 (C.1)\nwhere u\n(K) l = 1 2 b-1 -1 K-1 (λ R (K) ) i 2 from equation B.1.\nLemma Appendix C.4. Let f be two layers feed-forward DNN with activation function σ = ReLU. The expected error E f (K) -f (K) due to the ensemble expansion of order K is bounded by U defined as:\nU = K k=1 1 -P f (K) 1 > 0 ∪ f (K) 1,1 > 0 λ f (K) 1,2 λ R (k) 2 ∥R (k) 2 ∥ × E f (K) 1,2 (C.7)\nwhere ∥W∥, for any set of weights W, denotes the norm operator or equivalently the spectral norm.\nProof. By definition of the ReLU activation function, if we have f (K) 1 > 0 then the activation function of f 1 behaves as the identity function. Similarly, if f (K)\n1,1 > 0 then the activation function of f1 also behaves as the identity. Therefore, if we have (\nf (K) 1 > 0) ∪ ( f (K) 1,1 > 0), then f (K) 1 = f (K) 1 . We deduce that E f (K) -f (K) is equal to { f (K) 1 >0∪ f (K) 1,1 >0} C f (K) (x) -f (K) (x) Pdx (C.8)\nwhere A C the complementary set of a set A and x is the input. In the set defined by f\n(K) 1,1 (x) = 0, the value of f (K) 1 (x) is the value of f (K) 1,2 (x). If we also have f (K) 1 (x) = 0 then ∥ f (K) (x) -f (K) (x)∥ = ∥ f (K) 1,2 (x)∥. We can deduce E f (K) 1 -f (K) 1 = 1 -P f (K) 1 > 0 ∪ f (K) 1,1 > 0 × E f (K) 1,2 (C.9)\nThe final result comes from the definition of the norm operator of a matrix and equation B.5.\nAn immediate limit to lemma Appendix C.4 is the difficulty to compute 1-P f\n(K) 1 > 0 ∪ f (K)\n1,1 > 0 . However, this value can be approached under the assumption that the distribution of the activations is symmetrical around 0. Such instances appear with batch normalization layers and result in 1\n-P f (K) 1 > 0 ∪ f (K) 1,1 > 0 ≈ 1 2 .\nWe also propose to compute the operator norm instead of E f (K) 1,2 in order to remain data-free. In consequence, we derive the following corollary.\nCorollary Appendix C.5. The previous upper bound U on the expected error due to the ensemble expansion can be approximated as follows\nU ≈ 1 2 K k=1 ∥R (k) 2 ∥λ f (K) 1,2 λ R (k) 2 K k=K 1 ∥R (k) 1 ∥λ x λ R (k) 1 (C.10)\nIn practice, for expansion in b bits, with high values of b (e.g. b ≥ 4), the single operator R (1) is enough to satisfy equation B.4 and K 1 = 1. For lower values of b (e.g. ternary quantization), the suitable value for K 1 depends on the DNN architecture, usually ranging from 3 to 6. with the baseline model, see Fig 4). For int4 quantization, we report results at order 4 with an ensemble of 2 predictors, each containing two orders, i.e. K 1 = K 2 = 2. Using this setup ensures that the inference run-times are comparable (see Fig 4). First, given a budget of bit-wise operations that achieves equivalent expected run-time (Fig 4 ), PIPE can achieve higher accuracy than existing approaches: e.g. on both MobileNet V2 and ResNet, in int4, PIPE outperforms SQuant [19], the best data-free method that does not involve data-generation (DG), by 16.3 points and 7.5 respectively. Note that other methods such as OCS (also involving structural changes in the neural network architecture) considerably underperforms, especially on int4 where the accuracy drops to 0.1. PIPE , however, fully preserves the floating point accuracy.\nSecond, PIPE even outperforms the impressive (yet time consuming) data-generation (DG) based methods such as ZeroQ [17], DSG [18], GDFQ [56] and MixMix [57]. The difference is more noticeable on low bit quantization, e.g. b = 4. 
Nevertheless, PIPE improves the accuracy on this benchmark by 6.35% top-1 accuracy. Similarly, to the results on MobileNet V2, on ResNet-50, PIPE reaches accuracies very close the full precision model (76.13), significantly outperforming its closest contender, MixMix (74.58)." }, { "figure_ref": [], "heading": "Appendix H Operation Head-count", "publication_ref": [ "b57" ], "table_ref": [], "text": "Let W be the real-valued weights of a d × d convolutional layer on input feature maps of shape D × D × n i and n o outputs and stride s. Then the convolutional product requires d 2 D 2 s 2 n i n o floating point multiplications. The quantized layer requires two rescaling operations (for the quantization of the inputs and the Q -1 operation) and an int-b convolution, i.e. n i D 2 + D 2 s 2 n o floating point multiplications and d 2 D 2 s 2 n i n o int-b multiplications. Note that the number of additions remains unchanged. According to [58] the lowest complexity for b-digits scalar multiplication is o(b log(b)) bit operations. This is theoretically achieved using Harvey-Hoeven algorithm (also the asymptomatic bound has yet to be proved). We use this value as it is the least favorable setup for the proposed method. As a consequence the number O original bit operations required for the original layer, O R (1) the number of bit operations for the naively quantized layer and O R (k) for the i th order residual quantization expansion are Using this result we can estimate the maximum order of expansion before which the number of operations in f (k) exceeds the O baseline . Note that in the case of fully-connected layers, D = 1, s = 1 and d = 1. This metric doesn't consider the fact that the added operations can be performed in parallel.\n             O original = D 2 d 2 n i n o" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "We consider quantized expansions of networks with M predictors such that Σ M m=1 K m = K (K 1 is the number of orders in the first predictor of the ensemble) and γ the sparsity factor. The larger K 1 , the lower the difference between the ensemble f (K) and the developed network f (K) . Conversely, the more balanced the elements of the ensemble, the more runtime-efficient the ensemble: thus, K 1" }, { "figure_ref": [], "heading": "Appendix B.1 Ensemble of two Layers DNNs", "publication_ref": [], "table_ref": [], "text": "Let F be a feed-forward DNN with two layers f 1 , f 2 and σ a piece-wise affine activation function (e.g. ReLU). Given (R (k) 1 ) k=1...K and b 1 the kernel and bias weights of the first layer f (K)\nProof. Let's consider L = 2, and F : X → Bσ(Ax). For any X in the domain of F such that ∥X∥ = 1, we have\nwhere s B is the largest singular value of B and s A is the largest singular value of A. Following the definition of the 2-norm and ∞-norm, we get that\nwhere s A-A (K) is the largest singular value of the residual error of order K, A -A (K) and u (K) A is derived from equation B.1. Consequently, we get\nSparse Expansion.\nWe note s l the largest singular value of W l , i.e. the spectral norm of W l . Then we have max\nwhere u (K)\nThis result is directly derived from Theorem Appendix C.1.\nEnsemble Expansion. with the following theorem:\nLet F be a L layers feed-forward DNN with ReLU activation. 
The expected error E F (K) -F(K) due to the ensemble expansion at order K with M predictors is bounded by U ens which can be approximated as:\nThe upper bound U ens is directly deduced from the largest eigenvalues and reduces as the size of the expansion diminishes. Moreover, the larger K 1 the faster the convergence, which is intuitive as in our approximation σ(x + ϵ) ≈ σ(x) + σ(ϵ) relies on the fact that x >> ϵ. Thus, the ensembling approximation is a way to find new trade-offs in terms of accuracy and parallelization of the inference. To sum it up, any deep neural network can be approximated by an ensemble expansion of quantized networks, with theoretical guarantees of the approximation error. In practice, as we will showcase in the experiments, this ensemble approximation from expansion of residual errors leads to superior performances in terms of accuracy-inference time trade-off.\nWe provide the following intermediate result regarding two-layers DNNs.\nEnsembling with more Layers. We generalize this result to any feed-forward neural network f with L > 2 layers and activation functions σ 1 , ..., σ L-1 . Equation B.3 becomes:\nwhere\n. We reason by induction on the layer L -1. Similarly to what precedes, we assume that we have g(K)\nL-2 , ..., g(K) 1 and h(K) L-2 , ..., h(K) 1 such that:\nIn order to simplify the composition notations, we note X f the input of a function f. With this in mind Eq. B.5 becomes:\nThe key element is the definition of g(K)\nL-1 and h(K) L-1 , which are obtained by applying equation B.5 two times, on g (K)\nL-2 and h (K) L-2 independently. This is detailed in Appendix Appendix B. We showed that we can express f (K) as a sum of two quantized DNNs g and h. The first predictor g is equal to the expansion f (K 1 ) of f at order K 1 while h is equal to the expansion of f (K)f (K 1 ) at order K -K 1 . This result can be extended to rewrite f as an ensemble of M predictors, by selecting K 1 , ..., K M such that M m=1 K m = K: in such a case, the M th predictor will be the expansion of f (K)f ( M-1 m=1 K m ) at order K M . The proof of theorem Appendix C.3 follows from lemma Appendix C.4 and corollary Appendix C.5. We derive the exact formula for the upper bound U in the general case of L layers feed forward neural networks\nwhere\nThis is a consequence of the definition of the operator norm and the proposed ensembling. The approximation is obtained under the same assumptions and with the same arguments as provided in Corollary Appendix C.5." }, { "figure_ref": [], "heading": "Appendix D Implementation Details and Datasets", "publication_ref": [ "b42", "b43", "b44", "b27", "b0", "b1", "b54", "b51", "b3", "b45", "b13", "b18", "b16", "b18", "b14", "b13", "b18" ], "table_ref": [], "text": "We validate the proposed method on three challenging computer vision tasks which are commonly used for comparison of quantization methods. First, we evaluate on ImageNet [43] (≈ 1.2M images train/50k test) classification. Second, we report results on object detection on Pascal VOC 2012 [44] (≈ 17k images in the test set). Third, we benchmark on image segmentation on CityScapes dataset [45] (500 validation images).\nIn our experiments, we used MobileNets [28] and ResNets [1] on ImageNet. For Pascal VOC object detection we employed an SSD [2] architecture with MobileNet backbone. On CityScapes we used DeepLab V3+ [55] with MobileNet backbone. 
We also test our method on VGG 16 [52] and transformers such as BERT model [4] on GLUE [46] In our experiments, the inputs and activations are quantized using the same method as [14]. The number of bit-wise operation in our evaluation metric is discussed in Appendix Appendix H.\nFor SQuant [19], we use our own implementation, which achieve different accuracy results due to different initial accuracies for baseline models. As for ZeroQ [17], we use results provided by SQuant [19]. Similarly to prior work [15,14,19], we denote W•/A• the quantization setup (number of bits for weight quantization and number of bit for activation quantization).\nWe used Tensorflow implementations of the baseline models from the official repository when possible or other publicly available resources when necessary. MobileNets and ResNets for Im-ageNet come from tensorflow models zoo. In object detection, we tested the SSD model with a MobileNet backbone from Manish's git repository. Finally, in image semantic segmentation, the DeepLab V3+ model came from Bonlime's git repository.\nThe networks pre-trained weights provide standard baseline accuracies on each tasks. The computations of the residues as well as the work performed on the weights were done using the Numpy python's library. As listed in Table D.7, the creation of the quantized model takes less than a second for a MobileNet V2 as well as for a ResNet 152 without any optimization of the quantization process. These results were obtained using an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz." }, { "figure_ref": [], "heading": "Appendix E Pre-Processing Time", "publication_ref": [ "b16", "b17", "b55", "b56" ], "table_ref": [], "text": "As shown on Table E.8, in terms of quantization processing time, methods relying on data generation (DG) such as ZeroQ [17], DSG [18], GDFQ [56] and MixMix [57] are slow as they" }, { "figure_ref": [], "heading": "Appendix F.1 More Results on ImageNet", "publication_ref": [], "table_ref": [], "text": "In Table F.9, we provide complementary results to the comparison at equivalent bit-width using ternary quantization. We see that the results are stable across a wide range of architecture.\nIn the following sections, we show that PIPE is also very flexible and can be straightforwardly applied to other computer vision tasks, e.g. object detection and semantic segmentation." }, { "figure_ref": [], "heading": "Appendix F.2 Object Detection", "publication_ref": [ "b13", "b13", "b13" ], "table_ref": [], "text": "In Fig F .8, we report the performance of PIPE (as well as DFQ [14] for int8 quantization) using SSD-MobileNet as a base architecture for object detection. Overall, we observe a similar trend as in Section 4.3: PIPE reaches significantly lower numbers of bit-wise operations than the naive baseline (R k=1 γ=100% ) and state-of-the-art DFQ [14] while preserving the full model accuracy, using either int4, int3 or ternary quantization. Also, once again, the best results are obtained using ternary quantization with high orders (e.g. k = 8) and sparse residuals (e.g. γ = 25%): as such, the mAP of the best tested configuration, R (8) 25% , reaches 68.6% for 6.38e 9 bit-wise operations, vs. 67.9% for 3.36e 10 bit-wise operations for DFQ [14]." }, { "figure_ref": [], "heading": "Appendix F.3 Semantic Segmentation", "publication_ref": [], "table_ref": [], "text": "In Fig F .8, we report the performance of PIPE for image segmentation using a DeepLab v3+ architecture. 
Similarly to the previous tasks, PIPE is able to very efficiently quantize a semantic segmentation network, whether it is in int4 or higher (where order 2 is sufficient to reach the full precision mIoU), or in int3/ternary quantization. In the latter case, once again, it is better to use sparse, high order expansions: for instance, we were able to retrieve the full accuracy using R (9) 50% and ternary quantization, dividing by 10 the number of bit-wise operations as compared to the original model. This demonstrates the robustness of PIPE to the task and architecture." }, { "figure_ref": [], "heading": "Appendix G More Parallelization Results", "publication_ref": [ "b15", "b13", "b14", "b16", "b17", "b55", "b56" ], "table_ref": [], "text": "In Table G.10, we compare PIPE and the state-of-the-art data-free quantization methods OCS [16], DFQ [14], SQNR [15] and methods that involve synthetic data, such as ZeroQ [17], DSG [18], GDFQ [56] and MixMix [57]. We report results on int8 and int4 quantization: for the former case, we use order 2 residual expansion without ensembling (as the inference time are equivalent" } ]
Deep neural networks (DNNs) are ubiquitous in computer vision and natural language processing, but suffer from high inference cost. This problem can be addressed by quantization, which consists of converting floating point operations into a lower bit-width format. With growing concerns over privacy rights, we focus our efforts on data-free methods. However, such techniques suffer from a lack of adaptability to the target devices, as hardware typically supports only specific bit widths. Thus, to adapt to a variety of devices, a quantization method should be flexible enough to find good accuracy vs. speed trade-offs for every bit width and target device. To achieve this, we propose PIPE, a quantization method that leverages residual error expansion, along with group sparsity and an ensemble approximation, for better parallelization. PIPE is backed by strong theoretical guarantees and achieves superior performance on every benchmarked application (from vision to NLP tasks), architecture (ConvNets, transformers) and bit width (from int8 to ternary quantization).
PIPE: Parallelized Inference Through Post-Training Quantization Ensembling of Residual Expansions
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of the proposed method for a two-layers neural network. (a) residual expansion at order 4: the intensity of the colormap indicates the magnitude of the residual error. (b) group-sparse expansion for orders k ≥ 1 (γ = 50% sparsity). (c) ensemble expansion with two predictors. Dashed lines indicate parallel computations.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Example of ensemble expansion with groupings [K 1 , . . . , K M ] for a residual network. The original network is broken down into M networks, each having exactly the same architecture as the original network, except that (a) the weights for a layer l of network m ∈ {1, . . . , M} correspond to the residual expansion terms R K 1 +•••+K m-1 +1", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Accuracy vs. inference time, for EfficientNet B0. The higher (accuracy) and the further to the left (inference cost) the better. The pentagons show the baseline results with W3/A3, W4/A4, W5/A5 and W6/A6 quantization. The dashed lines show the trade-offs in performance of PIPE in W4/A4 and ternary quantization. Finally, the plain lines show PIPE (with sparsity) also in W4/A4 and ternary quantization. The numbers in the symbols stands for the expansion order. PIPE , and a fortiori the sparse version, enables better trade-offs.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Standardized inference time on ImageNet of different architectures. We demonstrate that parallelization of the overhead computations brought by the proposed ensemble approximation drastically reduces their impact on runtime on an intel m3 CPU.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "1 Figure A. 5 :15Figure A.5: Comparison between the expected empirical error from ensembling E[∥ f (K) -f (K) ∥] (green) and its upper bound U (Lemma Appendix C.3, orange) for different values of K 1 on a ResNet 50 trained on ImageNet and quantized with ternary values and K = 13, γ = 25%. We also plot the reference E[∥ f (K) ∥] (grey).", "figure_data": "", "figure_id": "fig_4", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure A. 6 :6Figure A.6: Comparison between ensemble expansion f (K) (in orange) and regular expansion f (K) (blue) on ImageNet. We test different bit representations, namely ternary (TNN) and int4 as well as different values for K 1 . Except for very low values of the ratio K 1 /K, we observe the robustness of the ensembling method.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure A. 7 :7Figure A.7: Comparison between TNN ensemble expansion f (K) (in orange) and regular expansion f (K) (blue) on ImageNet.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure F. 8 :8Figure F.8: (a) Mean average precision (mAP) of a SSD with MobileNet V2 backbone on Pascal VOC for object detection. We add the performance of a data-free quantization solution, DFQ [14] for comparison. 
(b) Mean intersection over union (mIoU) of a Deeplab V3+ with MobileNet V2 backbone on CityScapes for semantic segmentation.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "s 2 3232log(32) O R (1) = D 2 (n i + n o s 2 )32 log(32) + d 2 n i n o s 2 b log(b) O R (k-1) = D 2 (n i + n o s 2 )32 log(32) + k d 2 n i n o s 2 b log(b) (H.1)", "figure_data": "", "figure_id": "fig_10", "figure_label": "32", "figure_type": "figure" }, { "figure_caption": "Comparison at equal BOPs (i.e. no-parallelization) with existing methods in W6/A6 and PIPE with W4/A6 +50% of one 4 bits residue. In all tested configurations, distributing the computations between the residuals in a lower bit format enables to find superior trade-offs.", "figure_data": "DNNmethodyearbitsAccuracyfull-precision76.15DFQ ICCV '19W6/A671.36ZeroQ CVPR '20W6/A672.93ResNet 50DSG CVPR '21 GDFQ ECCV '20W6/A6 W6/A674.07 74.59SQuant ICLR '22W6/A675.95SPIQ WACV '23W6/A675.98PIPE-150% × W4/A6 76.01full-precision71.80DFQ ICCV '19W6/A645.84MobNet v2SQuant ICLR '22W6/A661.87SPIQ WACV '23W6/A663.24PIPE-150% × W4/A6 64.20full-precision77.10EffNet B0DFQ ICCV '19 SQuant ICLR '22W6/A6 W6/A643.08 54.51PIPE-150% × W4/A6 57.63", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "GLUE task quantized in W4/A8. We consider the BERT transformer architecture[4] and provide the original performance (from the article) of BERT on GLUE as well as our reproduced results (reproduced). PIPE is applied to the weights with 3 bits + 33% sparse expansion.", "figure_data": "task original reproduceduniform log SQuant SPIQ PIPECoLA 49.2347.9045.60 45.67 46.88 46.23 47.02SST-2 91.9792.3291.81 91.53 91.09 91.01 91.88MRPC 89.4789.3288.24 86.54 88.78 88.78 88.71STS-B 83.9584.0183.89 84.01 83.80 83.49 83.92QQP88.4090.7789.56 90.30 90.34 90.30 90.50MNLI 80.6180.5478.96 78.96 78.35 78.52 79.03QNLI 87.4691.4789.36 89.52 90.08 89.64 90.08RTE61.7361.8260.96 60.46 60.21 60.21 61.20WNLI 45.0743.7639.06 42.19 42.56 42.12 42.63on object detection and image segmentation in Appendix F.Second, In Table", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Accuracy for MobileNet V2 on ImageNet with 8 bits activations. 
PIPE uses ensembling of two weak predictors each sharing the same number of bit operations and similar runtime, based on Figure4.", "figure_data": "methodyearno-DG bits accuracyfull-precision71.80OCSICML 2019✓W8/A8 71.10DFQICCV 2019✓W8/A8 70.92SQNR ICML 2019✓W8/A871.2ZeroQ CVPR 2020✗W8/A8 71.61SPIQ WACV 2023✓W8/A8 71.79GDFQ ECCV 2020✗W8/A8 71.80PIPE-✓W8/A8 71.80DFQICCV 2019✓W4/A80.10ZeroQ CVPR 2020✗W4/A8 49.83GDFQ ECCV 2020✗W4/A8 51.30SQuant ICLR 2022✓W4/A8 55.38MixMix CVPR 2021✗W4/A8 65.38AITCVPR 2022✗W4/A8 66.47IntraQ CVPR 2022✗W4/A8 65.10PIPE-✓W4/A8 71.73", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "bits K sparsity ensembleUU empirical8 1✗✗0.120.058 4✗✗1.99 ×10 -7 1.78 ×10 -78 2 50%✗0.060.058 4 50%✗1.17 ×10 -7 0.65 ×10 -78 2✗✓0.090.028 4✗✓0.47 ×10 -4 0.43 ×10 -4", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "We report the different trade-offs achieved with PIPE expanding over different proposed quantization operators in W4/A4 as compared to their performance in W8/A8, on a MobileNet V2.", "figure_data": "method W4 W4 + 25% W4 + 50% W4 + 75% W6 W8naive0.1 53.11 64.20 71.61 51.47 70.92SQuant 4.23 58.64 67.43 71.74 60.19 71.68SPIQ5.81 59.37 68.82 71.79 63.24 71.79AdaRound 6.17 60.30 69.80 71.77 68.71 71.75BrecQ 66.57 70.94 71.28 71.76 70.45 71.76", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "7: Processing time on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz of the proposed method for different configurations and architectures trained on ImageNet and a quantization in TNN. We note 'm' for minutes and 's' for seconds.", "figure_data": "k ensembling ResNet 152 MobileNet v2 (1.4)1✗0.32s0.12s2✗0.43s0.13s2✓0.43s0.13s7✗0.90s0.51s7✓0.92s0.51s", "figure_id": "tab_6", "figure_label": "D", "figure_type": "table" }, { "figure_caption": "8: PIPE , MixMix, GDFQ and ZeroQ 4-bit quantization time in seconds. PIPE was quantized such that full-precision accuracy is reached.", "figure_data": "ModelZeroQ GDFQ MixMix PIPEResNet 5092.111.10 3 18.10 3<1ResNet 101164.0 18.10 3 25.10 3<1ResNet 152246.4 24.10 3 30.10 31.1MobileNet V2 (0.35) 27.43.10 36.10 3<1MobileNet V2 (1)37.97.10 312.10 3<1Table F.9: Comparison at equivalent bit-width (i.e. no-parallelization) with existing methods in W8/A8 andPIPE with W2/A2 with K = 4.DNN methodyearbitsAccDNN methodyearbitsAccfull-precision76.15full-precision71.80DFQ ICCV 2019 ZeroQ CVPR 20208 875.45 75.89MobNet v2DFQ ICCV 2019 SQuant ICLR 20228 870.92 71.68ResNet 50DSG CVPR 2021 GDFQ ECCV 20208 875.87 75.71PIPE-full-precision 400% ×2 71.65 77.10SQuant ICLR 2022 SPIQ WACV 20238 876.04 76.15EffNet B0DFQ ICCV 2019 SQuant ICLR 20228 876.89 76.93PIPE-400% ×2 76.15PIPE-400% ×2 76.95usually require many forward-backward passes to quantize a trained neural network. PIPE , on theother hand, is very fast in addition to being better at preserving the original model accuracy withtheoretical control over the error introduced by quantization.Appendix F Other Results on ConvNets", "figure_id": "tab_7", "figure_label": "E", "figure_type": "table" }, { "figure_caption": "10: Accuracy for ResNet 50 on ImageNet. 
PIPE (with ensembling) achieves excellent accuracy for both standard and low-bit quantization, more-so using high-order sparse expansions, vastly outperforming previous state-of-the-art data-free quantization approaches such as OCS, DFQ, SQNR and SQuant, and even approaches that require fine-tuning as MixMix, ZeroQ, DSG and GDFQ.", "figure_data": "methodyearno-DG b accuracymethodyearno-DG b accuracyfull-precision76.15full-precision76.15OCSICML 2019✓875.70OCSICML 2019✓40.1DFQiCCV 2019✓876.00DSGCVPR 2021✗423.10SQNR ICML 2019✓875.90GDFQ ECCV 2020✗455.65ZeroQ CVPR 2020✗875.89SQuant ICLR 2022✓468.60DSGCVPR 2021✗875.87MixMix CVPR 2021✗474.58SQuant ICLR 2022✓876.04AITCVPR 2022✗466.47PIPE-✓876.15PIPE-✓476.13", "figure_id": "tab_8", "figure_label": "G", "figure_type": "table" } ]
Edouard Yvinec; Arnaud Dapogny; Kevin Bailly
[ { "authors": "K He; X Zhang", "journal": "", "ref_id": "b0", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "W Liu; D Anguelov; D Erhan; C Szegedy; S Reed; C.-Y Fu; A C Berg", "journal": "Springer", "ref_id": "b1", "title": "Ssd: Single shot multibox detector", "year": "2016" }, { "authors": "L.-C Chen; G Papandreou; F Schroff; H Adam", "journal": "", "ref_id": "b2", "title": "Rethinking atrous convolution for semantic image segmentation", "year": "2017" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova; Bert ", "journal": "", "ref_id": "b3", "title": "Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "R Krishnamoorthi", "journal": "", "ref_id": "b4", "title": "Quantizing deep convolutional networks for efficient inference: A whitepaper", "year": "2018" }, { "authors": "M Courbariaux; I Hubara", "journal": "NeurIPS", "ref_id": "b5", "title": "Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1", "year": "2016" }, { "authors": "S Wu; G Li; F Chen; L Shi", "journal": "ICLR", "ref_id": "b6", "title": "Training and inference with integers in deep neural networks", "year": "2018" }, { "authors": "B Jacob; S Kligys; B Chen; M Zhu; M Tang; A Howard; H Adam; D Kalenichenko", "journal": "", "ref_id": "b7", "title": "Quantization and training of neural networks for efficient integer-arithmetic-only inference", "year": "2018" }, { "authors": "J Achterhold; J M Koehler; A Schmeink; T Genewein", "journal": "ICLR", "ref_id": "b8", "title": "Variational network quantization", "year": "2018" }, { "authors": "C Louizos; M Reisser; T Blankevoort; E Gavves; M Welling", "journal": "ICLR", "ref_id": "b9", "title": "Relaxed quantization for discretized neural networks", "year": "2018" }, { "authors": "T Sheng; C Feng; S Zhuo; X Zhang; L Shen; M Aleksic", "journal": "IEEE", "ref_id": "b10", "title": "A quantization-friendly separable convolution for mobilenets", "year": "2018" }, { "authors": "K Choi; H Y Lee; D Hong; J Yu; N Park; Y Kim; J Lee", "journal": "", "ref_id": "b11", "title": "It's all in the teacher: Zero-shot quantization brought closer to the teacher", "year": "2022" }, { "authors": "Y Zhong; M Lin; G Nan; J Liu; B Zhang; Y Tian; R Ji", "journal": "", "ref_id": "b12", "title": "Intraq: Learning synthetic images with intra-class heterogeneity for zero-shot network quantization", "year": "2022" }, { "authors": "M Nagel; M V Baalen", "journal": "", "ref_id": "b13", "title": "Data-free quantization through weight equalization and bias correction", "year": "2019" }, { "authors": "E Meller; A Finkelstein; U Almog; M Grobman", "journal": "", "ref_id": "b14", "title": "Same, same but different: Recovering neural network quantization error through weight factorization", "year": "2019" }, { "authors": "R Zhao; Y Hu; J Dotzel; C De Sa; Z Zhang", "journal": "", "ref_id": "b15", "title": "Improving neural network quantization without retraining using outlier channel splitting", "year": "2019" }, { "authors": "Y Cai; Z Yao; Z Dong; A Gholami; M W Mahoney; K Keutzer", "journal": "", "ref_id": "b16", "title": "Zeroq: A novel zero shot quantization framework", "year": "2020" }, { "authors": "X Zhang; H Qin; Y Ding; R Gong; Q Yan; R Tao; Y Li; F Yu; X Liu", "journal": "", "ref_id": "b17", "title": "Diversifying sample generation for accurate data-free quantization", "year": "2021" }, { "authors": "G Cong", "journal": "ICLR", "ref_id": "b18", "title": 
"Squant: On-the-fly data-free quantization via diagonal hessian approximation", "year": "2022" }, { "authors": " Nvidia", "journal": "", "ref_id": "b19", "title": "Nvidia a100 tensor core gpu architecture", "year": "2021" }, { "authors": "E Yvinec; A Dapgony; M Cord; K Bailly", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b20", "title": "Rex: Data-free residual quantization error expansion", "year": "2023" }, { "authors": "M Rabbani", "journal": "Journal of Electronic Imaging", "ref_id": "b21", "title": "Jpeg2000: Image compression fundamentals, standards and practice", "year": "2002" }, { "authors": "S G Mallat", "journal": "Princeton University Press", "ref_id": "b22", "title": "A theory for multiresolution signal decomposition: the wavelet representation", "year": "2009" }, { "authors": "K Ullrich; E Meeds; M Welling", "journal": "ICLR", "ref_id": "b23", "title": "Soft weight-sharing for neural network compression", "year": "2017" }, { "authors": "S Zhou; Y Wu; Z Ni; X Zhou; H Wen; Y Zou", "journal": "", "ref_id": "b24", "title": "Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients", "year": "2016" }, { "authors": "S Oh; H Sim; S Lee; J Lee", "journal": "", "ref_id": "b25", "title": "Automated log-scale quantization for low-cost deep neural networks", "year": "2021" }, { "authors": "P Stock; B Graham; R Gribonval; H Jégou", "journal": "ICLR", "ref_id": "b26", "title": "Equi-normalization of neural networks", "year": "2019" }, { "authors": "M Sandler; A Howard", "journal": "", "ref_id": "b27", "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "year": "2018" }, { "authors": "D Lin; S Talathi; S Annapureddy", "journal": "PMLR", "ref_id": "b28", "title": "Fixed point quantization of deep convolutional networks", "year": "2016" }, { "authors": "B Feng; Y Wang; T Geng; A Li; Y Ding", "journal": "", "ref_id": "b29", "title": "Apnn-tc: Accelerating arbitrary precision neural networks on ampere gpu tensor cores", "year": "2021" }, { "authors": "C Robinson", "journal": "", "ref_id": "b30", "title": "Untether.ai boqueria 1458 risc-v core ai accelerator", "year": "2022-08" }, { "authors": "Y Zhang; Z Zhang; L Lew", "journal": "", "ref_id": "b31", "title": "Pokebnn: A binary pursuit of lightweight accuracy", "year": "2022" }, { "authors": "T G Dietterich", "journal": "Springer", "ref_id": "b32", "title": "Ensemble methods in machine learning", "year": "2000" }, { "authors": "L Breiman", "journal": "Machine learning", "ref_id": "b33", "title": "Bagging predictors", "year": "1996" }, { "authors": "J Friedman; T Hastie; R Tibshirani", "journal": "The annals of statistics", "ref_id": "b34", "title": "Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors)", "year": "2000" }, { "authors": "L Breiman", "journal": "", "ref_id": "b35", "title": "Arcing the edge", "year": "1997" }, { "authors": "A Rame; M Cord", "journal": "ICLR", "ref_id": "b36", "title": "Dice: Diversity in deep ensembles via conditional redundancy adversarial estimation", "year": "2021" }, { "authors": "J Guo; S Gould", "journal": "", "ref_id": "b37", "title": "Deep cnn ensemble with data augmentation for object detection", "year": "2015" }, { "authors": "T Y Tan; L Zhang; C P Lim; B Fielding; Y Yu; E Anderson", "journal": "IEEE access", "ref_id": "b38", "title": "Evolving ensemble models for image segmentation using enhanced particle swarm optimization", "year": "2019" }, { "authors": "E Arnaud; A 
Dapogny; K Bailly", "journal": "TAC", "ref_id": "b39", "title": "Thin: Throwable information networks and application for facial expression recognition in the wild", "year": "2022" }, { "authors": "S Zhu; X Dong; H Su", "journal": "", "ref_id": "b40", "title": "Binary ensemble neural network: More bits per network or more networks per bit?", "year": "2019" }, { "authors": "A Gholami; S Kim; Z Dong; Z Yao; M W Mahoney; K Keutzer", "journal": "", "ref_id": "b41", "title": "A survey of quantization methods for efficient neural network inference", "year": "2021" }, { "authors": "J Deng; W Dong", "journal": "CVPR", "ref_id": "b42", "title": "ImageNet: A Large-Scale Hierarchical Image Database", "year": "2009" }, { "authors": "M Everingham; L Van Gool; C K I Williams; J Winn; A Zisserman", "journal": "", "ref_id": "b43", "title": "The PAS-CAL Visual Object Classes Challenge", "year": "2012" }, { "authors": "M Cordts; M Omran; S Ramos; T Rehfeld; M Enzweiler; R Benenson; U Franke; S Roth; B Schiele", "journal": "", "ref_id": "b44", "title": "The cityscapes dataset for semantic urban scene understanding", "year": "2016" }, { "authors": "A Wang; A Singh; J Michael; F Hill; O Levy; S Bowman", "journal": "", "ref_id": "b45", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "year": "2018" }, { "authors": "E Yvinec; A Dapogny; K Bailly", "journal": "", "ref_id": "b46", "title": "To fold or not to fold: a necessary and sufficient condition on batchnormalization layers folding", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b47", "title": "Intel, Intel® distribution of openvino™ toolkit", "year": "2022" }, { "authors": " ", "journal": "", "ref_id": "b48", "title": "Nvidia distribution of tensorrt toolkit", "year": "2022" }, { "authors": "D Miyashita; E H Lee; B Murmann", "journal": "", "ref_id": "b49", "title": "Convolutional neural networks using logarithmic data representation", "year": "2016" }, { "authors": "E Yvinec; A Dapogny; M Cord; K Bailly", "journal": "WACV", "ref_id": "b50", "title": "Spiq: Data-free static input quantization", "year": "2023" }, { "authors": "K Simonyan; A Zisserman", "journal": "BMVC", "ref_id": "b51", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2014" }, { "authors": "M Nagel; R A Amjad; M Van Baalen; C Louizos; T Blankevoort", "journal": "PMLR", "ref_id": "b52", "title": "Up or down? adaptive rounding for post-training quantization", "year": "2020" }, { "authors": "Y Li; R Gong; X Tan; Y Yang; P Hu; Q Zhang; F Yu; W Wang; S Gu", "journal": "", "ref_id": "b53", "title": "Brecq: Pushing the limit of posttraining quantization by block reconstruction", "year": "2021" }, { "authors": "L.-C Chen; Y Zhu; G Papandreou; F Schroff; H Adam", "journal": "", "ref_id": "b54", "title": "Encoder-decoder with atrous separable convolution for semantic image segmentation", "year": "2018" }, { "authors": "S Xu; H Li; B Zhuang; J Liu; J Cao; C Liang; M Tan", "journal": "Springer", "ref_id": "b55", "title": "Generative low-bitwidth data free quantization", "year": "2020" }, { "authors": "Y Li; F Zhu; R Gong; M Shen; X Dong; F Yu; S Lu; S Gu", "journal": "", "ref_id": "b56", "title": "Mixmix: All you need for data-free compression are feature and data mixing", "year": "2021" }, { "authors": "E Klarreich", "journal": "Communications of the ACM", "ref_id": "b57", "title": "Multiplication hits the speed limit", "year": "2019" } ]
[ { "formula_coordinates": [ 4, 263.12, 502.2, 267.65, 28.53 ], "formula_id": "formula_0", "formula_text": "Q(W l ) = W l s W l(1)" }, { "formula_coordinates": [ 4, 65.11, 569.87, 465.65, 27.68 ], "formula_id": "formula_1", "formula_text": "Q -1 (Q(W l )) = s l × Q(W l )." }, { "formula_coordinates": [ 4, 81.47, 613.21, 84.13, 13.23 ], "formula_id": "formula_2", "formula_text": "W l -Q -1 (Q(W l ))." }, { "formula_coordinates": [ 5, 102.59, 116.92, 391.57, 139.46 ], "formula_id": "formula_3", "formula_text": "F (4) ( x) (a) residual expansion x R (k=1) R (k=2) R (k=3) R (k=4) R (k=1) R (k=2) R (k=3) R (k=4) F (4) ( x) x R γ =0.5 (k=1) R γ =0.5 (k=2) R γ =0.5 (k=3) R γ =0.5 (k=4) R (k=1) R γ =0.5 (k=2) R γ =0.5 (k=3) R γ =0.5 (k=4) F(4) ( x) x R γ =0.5 (k=1) R γ =0.5 (k=2) R γ =0.5 (k=3) R γ =0.5 (k=4) R (k=1) R γ =0.5 (k=2) R γ =0.5 (k=3) R γ =0.5(k=4)" }, { "formula_coordinates": [ 5, 242.63, 367.19, 288.13, 13.39 ], "formula_id": "formula_4", "formula_text": "R 2 = Q -1 Q W -R 1(2)" }, { "formula_coordinates": [ 5, 231.85, 456.96, 298.92, 35.05 ], "formula_id": "formula_5", "formula_text": "R K = Q -1        Q        W - K-1 k=1 R k              (3)" }, { "formula_coordinates": [ 5, 186.25, 650.59, 344.51, 35.05 ], "formula_id": "formula_6", "formula_text": "ϵ max ≤ U = L l=1        l i=1 1 2 b-1 -1 K-1 s R i 2 + 1        -1 (4)" }, { "formula_coordinates": [ 6, 111.4, 402.29, 142.57, 16.68 ], "formula_id": "formula_7", "formula_text": "terms R K 1 +•••+K m-1 +1 l . . . R K 1 +•••+K m l" }, { "formula_coordinates": [ 6, 223.1, 522.54, 307.66, 35.05 ], "formula_id": "formula_8", "formula_text": "R K = M K Q -1        Q        W - K-1 k=1 R k              (5)" }, { "formula_coordinates": [ 7, 245.11, 215.22, 285.65, 11.96 ], "formula_id": "formula_9", "formula_text": "σ(• + ϵ) ≈ σ(•) + σ(ϵ)(6)" }, { "formula_coordinates": [ 7, 64.51, 293.12, 245.36, 12.5 ], "formula_id": "formula_10", "formula_text": "m ∈ [|1, M|] and [K 1 , . . . , K M ] (K 1 + • • • + K M = K" }, { "formula_coordinates": [ 18, 165.25, 400.99, 266.34, 74.46 ], "formula_id": "formula_11", "formula_text": "k 1 =12 k 1 =6 k 1 =1 k 1 =12 k 1 =6 k 1 =1 k 1 =1TNN" }, { "formula_coordinates": [ 19, 189.44, 155.86, 243.44, 101.48 ], "formula_id": "formula_12", "formula_text": "R 25 % (13) R 25 %3,2,2,2,2,2] [2,2,2,2,2,2,1]" }, { "formula_coordinates": [ 19, 284.56, 245.4, 55.14, 11.95 ], "formula_id": "formula_14", "formula_text": "(13) R 25 %(13)" }, { "formula_coordinates": [ 19, 372.48, 245.4, 56.51, 11.95 ], "formula_id": "formula_15", "formula_text": "(13) R 25 %(13)" }, { "formula_coordinates": [ 19, 64.51, 369.82, 74.32, 13.75 ], "formula_id": "formula_16", "formula_text": "E[∥ f (K) -f (K) ∥]." }, { "formula_coordinates": [ 20, 162.47, 140.33, 270.34, 80.63 ], "formula_id": "formula_17", "formula_text": "K m ]. device [8] [4, 4] [3, 3, 2] [2, 2, 2, 2] [1] expansion ✓ ✓ ✓ ✓ ✗ ensembling ✗ ✓ ✓ ✓ ✗ Intel(R) i9-9900K" }, { "formula_coordinates": [ 20, 64.51, 337.88, 117.67, 10.75 ], "formula_id": "formula_18", "formula_text": "Lemma Appendix B.1." 
}, { "formula_coordinates": [ 20, 222.5, 391.52, 308.27, 35.05 ], "formula_id": "formula_19", "formula_text": "- K k=1 w (k) ≤ 1 2 b-1 -1 K-1 (λ R (K) ) i 2 (B.1)" }, { "formula_coordinates": [ 20, 253.32, 504.57, 277.45, 72.74 ], "formula_id": "formula_20", "formula_text": "       K-1 k=1 w (k) + Q -1 R (K) γ        ≤ N (K) • 1 (K) γ ∞ (λ R (k) ) i 2 b-1 -1 K 2 (B.2)" }, { "formula_coordinates": [ 21, 207.72, 188.57, 323.04, 35.05 ], "formula_id": "formula_21", "formula_text": "f (K) : x → σ        K k=1 R (k) Q(x)λ R (k) λ x + b        . (B.3)" }, { "formula_coordinates": [ 21, 64.51, 247.77, 466.25, 97 ], "formula_id": "formula_22", "formula_text": "K 1 < K such that f (K) 1 ≈ f (K) 1 = f (K) 1,1 + f (K) 1,2 with:            f (K) 1,1 : x → σ K 1 k=1 R (k) 1 x q λ R (k) 1 λ x + b 1 f (K) 1,2 : x → σ K k=K 1 +1 R (k) 1 x q λ R (k) 1 λ x (B.4) Furthermore F (K) : x → f (K) 2 ( f (K) 1 (x)). Let R (k)" }, { "formula_coordinates": [ 21, 205.68, 372.82, 325.08, 74.79 ], "formula_id": "formula_23", "formula_text": "F (K) ≈ F(K) = K k=1 R (k) 2 f (K) 1,1 λ R (k) 2 λ f (K) 1,1 + b 2 + K k=1 R (k) 2 f (K) 1,2 λ R (k) 2 λ f (K) 1,2 (B.5)" }, { "formula_coordinates": [ 21, 158.51, 593.14, 205.07, 18.91 ], "formula_id": "formula_24", "formula_text": "(K) L-1 (x) = σ L-1 K k=1 R (k) L-1 λ R (k) L-1 X f (K) L-1 + b L-1 ." }, { "formula_coordinates": [ 21, 115.22, 612.86, 415.54, 61.56 ], "formula_id": "formula_25", "formula_text": "K L-1 < K X f (K) L-1 →σ L-1        K L-1 k=1 R (k) L-1 X f (K) L-1 λ X f (K) L-1 λ R (k) L-1 + b L-1        + K k=K L-1 +1 R (k) L-1 X f (K) L-1 λ X f (K) L-1 λ R (k) L-1 (B.6)" }, { "formula_coordinates": [ 22, 165.9, 112.3, 119.04, 18.62 ], "formula_id": "formula_26", "formula_text": ")λ g(K) L-2 >> h(K) L-2 (X h(K) L-2 )λ h(K) L-2" }, { "formula_coordinates": [ 22, 75.13, 142.35, 455.63, 76.97 ], "formula_id": "formula_27", "formula_text": "f (K) L-1 (x) = σ L-1 K L-1 k=1 R (k) L-1 g(K) L-2 (X g(K) L-2 ) × λ R (k) L-1 λ g(K) L-2 + b L-1 + K k=K L-1 +1 R (k) L-1 g(K) L-2 (X g(K) L-2 )λ R (k) L-1 λ g(K) L-2 + σ L-1 K L-1 k=1 R (k) L-1 h(K) L-2 (X h(K) L-2 ) × λ R (k) L-1 λ h(K) L-2 + K k=K L-1 +1 R (k) L-1 h(K) L-2 (X h(K) L-2 )λ R (k) L-1 λ h(K) L-2 (B.7)" }, { "formula_coordinates": [ 22, 309.76, 230.81, 141.02, 16.45 ], "formula_id": "formula_28", "formula_text": "L-1 such that f (K) L-1 = g(K) L-1 + h(K)" }, { "formula_coordinates": [ 22, 142.87, 272.35, 387.89, 114.64 ], "formula_id": "formula_29", "formula_text": "                                   g(K) L-1 (X g(K) L-2 ) = σ L-1 K L-1 k=1 R (k) L-1 g(K) L-2 (X g(K) L-2 ) × λ R (k) L-1 λ g(K) L-2 + b L-1 + K k=K L-1 +1 R (k) L-1 g(K) L-2 (X g(K) L-2 )λ R (k) L-1 λ g(K) L-2 h(K) L-1 (X h(K) L-2 ) = σ L-1 K L-1 k=1 R (k) L-1 h(K) L-2 (X h(K) L-2 ) × λ R (k) L-1 λ h(K) L-2 + K k=K L-1 +1 R (k) L-1 h(K) L-2 (X h(K) L-2 )λ R (k) L-1 λ h(K) L-2 (B.8)" }, { "formula_coordinates": [ 22, 224.49, 649.05, 306.27, 60.69 ], "formula_id": "formula_30", "formula_text": "∥X∥=1 ∥F(X) -F(X) (K) ∥ ∞ ≤ U res U res = L l=1        l i=1 s i u (K) i + 1        -1 (C.1)" }, { "formula_coordinates": [ 22, 102.25, 716.17, 199.67, 20.62 ], "formula_id": "formula_31", "formula_text": "(K) l = 1 2 b-1 -1 K-1 (λ R (K) ) i 2 from equation B.1." 
}, { "formula_coordinates": [ 24, 175.72, 156.5, 355.05, 55.95 ], "formula_id": "formula_32", "formula_text": "U = K k=1 1 -P f (K) 1 > 0 ∪ f (K) 1,1 > 0 λ f (K) 1,2 λ R (k) 2 ∥R (k) 2 ∥ × E f (K) 1,2 (C.7)" }, { "formula_coordinates": [ 24, 64.51, 274.22, 466.25, 78.32 ], "formula_id": "formula_33", "formula_text": "f (K) 1 > 0) ∪ ( f (K) 1,1 > 0), then f (K) 1 = f (K) 1 . We deduce that E f (K) -f (K) is equal to { f (K) 1 >0∪ f (K) 1,1 >0} C f (K) (x) -f (K) (x) Pdx (C.8)" }, { "formula_coordinates": [ 24, 64.51, 364.98, 466.25, 101.3 ], "formula_id": "formula_34", "formula_text": "(K) 1,1 (x) = 0, the value of f (K) 1 (x) is the value of f (K) 1,2 (x). If we also have f (K) 1 (x) = 0 then ∥ f (K) (x) -f (K) (x)∥ = ∥ f (K) 1,2 (x)∥. We can deduce E f (K) 1 -f (K) 1 = 1 -P f (K) 1 > 0 ∪ f (K) 1,1 > 0 × E f (K) 1,2 (C.9)" }, { "formula_coordinates": [ 24, 454.06, 499.87, 64.98, 16.54 ], "formula_id": "formula_35", "formula_text": "(K) 1 > 0 ∪ f (K)" }, { "formula_coordinates": [ 24, 87.88, 543.21, 148.73, 16.87 ], "formula_id": "formula_36", "formula_text": "-P f (K) 1 > 0 ∪ f (K) 1,1 > 0 ≈ 1 2 ." }, { "formula_coordinates": [ 24, 199.98, 627.95, 330.78, 35.91 ], "formula_id": "formula_37", "formula_text": "U ≈ 1 2 K k=1 ∥R (k) 2 ∥λ f (K) 1,2 λ R (k) 2 K k=K 1 ∥R (k) 1 ∥λ x λ R (k) 1 (C.10)" }, { "formula_coordinates": [ 29, 170.88, 558.02, 90.18, 49.87 ], "formula_id": "formula_38", "formula_text": "             O original = D 2 d 2 n i n o" } ]
2023-11-27
[ { "figure_ref": [ "fig_0" ], "heading": "", "publication_ref": [ "b26", "b59", "b32", "b20", "b71", "b34", "b86", "b56", "b28", "b44", "b30", "b62", "b87", "b88", "b22", "b82", "b14", "b23", "b62", "b18", "b31", "b33", "b81", "b8", "b9", "b56", "b62" ], "table_ref": [], "text": "In recent times, the field of remote sensing (RS) imaging has witnessed remarkable advancements, revolutionizing Earth's surface monitoring for diverse applications, such as urban planning [27] and environmental monitoring [60]. Representation learning, a common approach in this domain, involves pre-training a model on a large image dataset, like ImageNet [33], using a supervised approach, showcasing significant improvements in various downstream tasks. However, the effectiveness of traditional deep learning models in analyzing complex RS images and outperforming ad-hoc machine learning methods is hampered by challenges in generalization when faced with domain shifts.\nTo tackle these challenges, researchers have explored the areas of domain adaptation (DA) [21,72], which entails capturing representations of the target domain during model adaptation, drawing from knowledge obtained from the target distribution during training, and domain generalization (DG) [35,87] addresses the practical scenario where the inclusion of RS data covering vast diversity during model training becomes arduous. DG capitalizes on labeled data from multiple source domains to learn a versatile and universal representation, striving to build an accurate model that adeptly handles any \"unseen\" target domain. Despite the resounding success of DG in the computer vision field, its potential application in the context of RS imagery remains relatively unexplored.\nLarge multimodal foundation models, such as CLIP [57], ALIGN [29], and ViLBERT [45], have demonstrated impressive performance in downstream tasks, even with limited training data under zero-shot or few-shot learning scenarios. These models establish connections between image and text pairs through contrastive learning and effectively fine-tuning with hand-crafted text prompts. This success has paved the way for two exciting research directions. The first direction focuses on adapting pre-trained VLMs to diverse downstream tasks thus venturing into the area of transfer learning. Popular transfer approaches, such as prompt tuning [31,63,88,89] and visual adaptation [23,83], also strive to achieve the desired objectives. On the other hand, the second direction explores knowledge distillation [15,24], enhancing VLMs' performance in downstream tasks like object detection and semantic segmentation. Remote sensing images often face adverse conditions, like high altitudes or cloudiness, making it challenging to classify them using pre-trained CNN models. To address this, APPLeNet [63] leverages prompt learning to generalize across remote sensing domains. Additionally, it employs context redundancy penalizing (CRP) loss to reduce redundancy between context tokens.\nHowever, this paper dives into the promising area of self supervised learning, a technique that has garnered popularity due to its accomplishments in language and vision [19,32,34,82]. Selfsupervised learning offers an attractive alternate to supervised pre-training, guiding models to learn better embedding spaces. It has the potential to significantly enhance representation learning in RS image analysis, bolstering the generalization capabilities of models across diverse environments. 
By leveraging self-supervised learning and contrastive methods [9,10], our work emphasizes the prospects of advancing RS image analysis through domain generalization techniques, contributing to the domain's evolution and yielding valuable insights into RS applications. In order to incorporate the SSL task, we create the patches of the input image and then jumble it before passing through the pre-trained vision encoder to get the contextual latent embeddings.\nAs CLIP [57] is trained on massive datasets of image-text pairs, it is able to learn the relationship between images and their corresponding text descriptions, thus gaining recognition for its remarkable prowess. However, despite its success, CLIP encounters a challenge in that it struggles to discern the positional relationships among different parts of an image, leading to occasional difficulties in processing jumbled imagery. Let us assume a jumbled image where various parts have been intentionally scrambled, presenting a captivating puzzle for CLIP to solve. In such intriguing scenarios, VLMs are not able to rely solely on its pre-trained knowledge to identify distinct part embeddings of the image. Unfortunately, acquiring sufficient data for fine-tuning a VLM to conquer this challenge is not always feasible.\nIn remote sensing data for computer vision, dealing with scrambled satellite images poses a unique challenge. These images have parts out of order due to practical factors like data transmission errors or limitations in satellite imaging. Using pretrained CLIP directly on such jumbled images is difficult as the extracted features lack meaningful information. APPLeNet [63] has achieved significant performance gains over CLIP and other learning methods for domain generalization in remote sensing images. However, APPLeNet encounters difficulties when dealing with jumbled RS images. As shown in Figure 1, our demonstrations indicate that APPLeNet's performance decreases by approximately 0.2% -0.8% on average across three types of domain generalization tasks (Base-to-New, Cross Dataset, and Single Source Multi Target) when presented with jumbled RS images instead of non-jumbled ones. To address this challenge of handling jumbled RS data effectively, we propose a method called C-SAW. It leverages a contrastive selfsupervised learning framework for robust domain generalization. Through a context-aware self-supervision mechanism, we divide an image into smaller patches and rearrange them randomly. With SSL training involving reconstruction and self-supervised loss, our approach reconstructs the original image while learning robust contextual representations from the jigsaw inputs. These learned representations enable the model to efficiently perform diverse downstream tasks, including classification. In summary, our contributions are as follows: -We propose a novel self-supervised prompt learning technique called C-SAW for image generalization in remote sensing applications.\n-Our proposed method, C-SAW, tackles the limitations of part embeddings in CLIP by incorporating a reconstruction task to enhance the latent visual space of the distorted input image. 
Furthermore, we generate the visual attention tokens using G V A T before the frozen text encoder in CLIP to impose desired prompt constraints on the input visual embeddings.\n-We extensively tested our approach on five optical remote sensing (RS) image classification benchmarks, evaluating its performance on three challenging generalization tasks: cross-dataset, base-tonew class, and single-source multi-target. Our results demonstrate that C-SAW surpasses the relevant literature by a significant margin, achieving a mean classification score improvement of approximately 2-6%." }, { "figure_ref": [], "heading": "RELATED WORKS 2.1 Self-Supervised Learning in Remote Sensing", "publication_ref": [ "b63", "b64", "b83", "b68", "b12", "b37", "b47", "b84", "b1", "b41", "b67", "b29", "b73", "b15", "b10", "b66", "b76" ], "table_ref": [], "text": "Stojnic et al. in [64,65] apply split-brain autoencoders [84] and Contrastive Multiview Coding (CMC) [69] on aerial images to learn effective representations for classification tasks while SatMAE [13] uses a masked autoencoder modified for remote sensing data. Recent works in semantic segmentation include [38] with a global style-local matching contrastive learning approach and [48] for the Vaihingen dataset. FALSE [85] proposes efficient negative sampling, where as [2,42] addresses supervision noise and temporal alignment. Other contrastive approaches include [68], where they use SSL approach to obtain high performance pre-training and [30], where the authors make use of the semantic similarities among nearby scenes. [74] creates diverse samples through spatial and spectral transformations. GAN discriminators are used in [16] and [11] for temporal and multiview images, respectively. For detailed information, one can refer to [67,77]." }, { "figure_ref": [], "heading": "Domain Generalization", "publication_ref": [ "b19", "b34", "b3", "b17", "b43", "b38", "b42", "b48", "b5", "b60", "b79", "b89", "b90", "b7", "b27", "b35", "b50", "b39", "b54", "b55", "b75", "b77", "b78", "b85", "b39", "b54", "b55", "b65", "b69", "b70", "b75", "b78", "b85", "b51", "b86" ], "table_ref": [], "text": "One of the crucial tasks for deep learning models facing domain shift challenges between training and test distributions, which is mainly referred to as domain generalization task. This task encompasses mainly two variants: multi-source domain generalization (Multi-DG) and single-source domain generalization (Single-DG). In Multi-DG, researchers have explored meta learning approaches [20,35] and subsequent works have built upon this, incorporating it for regularizers, semantic consistency, and feature critic losses [4,18,44]. Adversarial training methods [39,43,49] and Domain augmentation techniques [6,61,80,90,91] have been adopted to align feature distributions across different domains and generate new domains through adversarial examples. Other methods address Multi-DG through domain-specific masks, gradient-based dropout, episodic training, and style bias reduction techniques [8,28,36,51]. Over the years, Single-DG has gained more attention as a practical and challenging problem. Domain expansion is a prevalent approach in this regard, [40,55,56,76,78,79,86] although different methods have also been proposed, including adversarial attacks with semantic restrictions, information bottleneck techniques, Wasserstein auto-encoders, contrastive learning, uncertainty estimation, and objective functions for domain expansion [40,55,56,66,70,71,76,79,86]. 
Despite these efforts, domain generalization for remote sensing (RS) image classification remains an area with limited attention to date [52,87]." }, { "figure_ref": [], "heading": "Vision-Language Models and Prompt Learning", "publication_ref": [ "b0", "b74", "b2", "b13", "b57", "b56", "b40", "b52", "b61", "b88", "b91", "b46", "b62", "b49", "b87", "b91", "b30", "b62" ], "table_ref": [], "text": "Large-scale Vision-Language Models (VLMs) fuse visual and textual inputs, enhancing performance in various computer vision tasks. Multimodal learning excels over unimodal approaches in feature learning, benefiting tasks like visual question answering (VQA) [1], image captioning [75], and image retrieval [3], which rely on joint visual-semantic supervision. VLMs typically employ pre-trained language models like BERT [14] and GPT [58] for textual encoding, while Convolutional Networks (ConvNets) or Vision Transformers process visual inputs. Notable VLMs include CLIP [57] and VisualBERT [41]. Prompt learning gained popularity in NLP [53] and visual recognition tasks, utilizing pre-trained language models like BERT to offer valuable information through textual prompts for downstream tasks. Automated prompt generation, explored in AutoPrompt [62], identifies tokens with significant gradient changes for prompt optimization. CoOp [89] fine-tunes CLIP for few-shot image classification, optimizing prompts, where as other methods, like ProGrad [92], generate prompts from visual features and optimize context tokens. Additionally, PDL [47] proposes optimizing multiple prompt sets.\nIn the domain generalization for remote sensing images, AP-PLeNet [63] introduces a vision-language prompting technique. SLIP [50] improves CLIP's performance by supplementing contrastive learning with a self-supervised learning objective in a multi-task setup. These methods showcase the growing interest in prompt learning to enhance language models' capabilities in visual recognition tasks.\nCoCoOp [88] focuses on learning text prompts conditioned to input image embeddings, facilitating more context-aware queries. ProGrad [92] employs prompt tuning through distillation, transferring knowledge from learned few-shot prompts to zero-shot prompts, which can be especially valuable for improving generalization. MaPLe [31] specializes in fine-tuning the CLIP model by aligning vision and text modalities, enhancing the model's overall performance. APPLeNet [63] leverages attention mechanisms on conditioned image embeddings to improve text-prompting, simultaneously minimizing redundancy between context vectors for more efficient text conditioning on images. In contrast, C-SAW introduces a novel approach, that showcases the effectiveness of CLIP over image part embeddings by incorporating a reconstruction task (SSL) to enhance the latent visual space of distorted input images. Additionally, it generates visual attention tokens to impose desired prompt constraints on the input visual embeddings." }, { "figure_ref": [], "heading": "PROBLEM DEFINITION & METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "We define dataset D = {D 𝑆 ∪ D 𝑇 }, where D 𝑆 and D 𝑇 represent the source and target domains, respectively, with each domain containing input data X and corresponding labels Y. The probability distributions 𝑃 (D 𝑖 𝑆 ) can vary for each domain. During training, we utilize labels Y 𝑆 from the source domains D 𝑆 , while during testing, our focus shifts to labels Y 𝑇 from a distinct target test domain D 𝑇 . 
The probability distribution P (D 𝑇 ) differs from P (D 𝑖 𝑆 ) ∀ i. For base-to-new class generalization, there is no overlap between the label sets Y 𝑆 and Y 𝑇 . In single-source domain generalization, the label sets are identical, resulting in Y 𝑆 ∩ Y 𝑇 = Y 𝑆 ∪ Y 𝑇 . The cross-dataset domain generalization presents variable situations of overlapping or non-overlapping label sets, warranting further exploration. This analysis provides valuable insights into domain adaptation and the relationships between source and target domains in diverse generalization scenarios." }, { "figure_ref": [ "fig_1" ], "heading": "The Proposed C-SAW:", "publication_ref": [], "table_ref": [], "text": "In our paper, we introduce C-SAW, a novel approach that enhances text token learning and leverages the effectiveness of CLIP's visual backbone with jumbled input images (𝑥 ′ ) as presented in Figure 2. The pre-trained CLIP's vision encoder 𝑓 𝑣 extracts features from both the original input image (𝑥) and the jumbled input image (𝑥 ′ ), enabling contrastive generalization of images with text embeddings from the pre-trained CLIP's text encoder 𝑓 𝑡 . To achieve domain-agnostic style features for domain generalization, we utilize the mean 𝜇 of a batch of features from the intermediate layers (IL) of CLIP's vision encoder. We observed that CLIP faces challenges in generalizing classification tasks when generating prompts from part-embeddings of 𝑥 ′ . To overcome this, our proposed C-SAW generates visual attentive tokens using G V A T . Additionally, we optimize C-SAW using the following losses; L 𝑐𝑒 for classification, L 𝑑𝑚 for contextualizing pre-trained CLIP's vision encoder for generalizing over part-embeddings of jumbled input images, and L 𝑠𝑠𝑙 for self-supervised learning to improve generalization. Furthermore, we use L 𝑟𝑒𝑐𝑜𝑛 to reconstruct the original image from the jumbled input image, which helps in strengthening the latent space of 𝑥 ′ . In the following subsections, we provide detailed explanations of our key contributions. We utilize these final image-conditioned prompt embeddings for the classification task under the supervision of the cross-entropy loss (L 𝑐𝑒 ). Additionally, the upsampling block U, represented in light pink, is associated with the reconstruction loss L 𝑟𝑒𝑐𝑜𝑛 , self-supervised loss (L 𝑠𝑠𝑙 ), and diversity maxmization loss (L 𝑑𝑚 ).\nBest viewed in colour.\nSelf-supervised Block: We propose an upsampling network (U) to upscale the jigsaw or jumbled visual features from the vision encoder 𝑓 𝑣 (𝑥 ′ ), aiming to reconstruct the original image 𝑥. Our upsampling network (U) consists of four convolutional (CNN) layers, where the output dimensions of the first three CNN layers are reduced by half of the input dimensions using a kernel size of 7, a stride of 3, padding of 1, and output padding of 2. The channel dimension of the last CNN layer is set to match the input channel dimension of the RGB image. Finally, we reshape the output of the last layer through bilinear interpolation to match the shape of the original images, i.e., U (𝑓\n𝑣 (𝑥 ′ )) = x ∈ R 3×224×224 .\nWe further strengthen the contextualizing ability of CLIP's vision encoder 𝑓 𝑣 part embeddings, using self-supervision between visual features 𝑓 𝑣 (𝑥 ′ ) and 𝑓 𝑣 (𝑥). We define the reconstruction and self-supervised losses, i.e., L 𝑟𝑒𝑐𝑜𝑛 and L 𝑠𝑠𝑙 , in the subsequent paragraphs. 
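To make the self-supervised block concrete, the following is a minimal PyTorch sketch of one possible realization of the jigsaw input construction and the upsampling block U. It is not the reference implementation: the jumbling grid size, the channel widths of U, and the projection of CLIP's 512-dimensional embedding back onto a small spatial map are assumptions (the description above only fixes the kernel size 7, stride 3, padding 1, output padding 2, the 3-channel output of the last layer, and the final bilinear resize to 224×224). Transposed convolutions are assumed here because "output padding" is only defined for them and the block is described as upsampling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_jigsaw(x, grid=4):
    # Split a (B, 3, 224, 224) batch into grid x grid patches and shuffle them.
    # The grid size is an assumption; the text only states that patches are jumbled.
    B, C, H, W = x.shape
    ph, pw = H // grid, W // grid
    patches = x.unfold(2, ph, ph).unfold(3, pw, pw)           # (B, C, grid, grid, ph, pw)
    patches = patches.contiguous().view(B, C, grid * grid, ph, pw)
    perm = torch.randperm(grid * grid, device=x.device)
    patches = patches[:, :, perm]                             # random patch order
    patches = patches.view(B, C, grid, grid, ph, pw)
    patches = patches.permute(0, 1, 2, 4, 3, 5).contiguous()  # re-tile into an image
    return patches.view(B, C, H, W)

class UpsamplingBlockU(nn.Module):
    # Four transposed convolutions with kernel 7, stride 3, padding 1, output padding 2;
    # the last layer emits 3 channels and the result is bilinearly resized to 224 x 224,
    # i.e. U(f_v(x')) in R^{3x224x224}. Channel widths below are illustrative only.
    def __init__(self, in_dim=512, chans=(256, 128, 64, 32)):
        super().__init__()
        # Hypothetical projection of the 512-d CLIP embedding onto a 2x2 spatial map.
        self.to_spatial = nn.Linear(in_dim, chans[0] * 2 * 2)
        self.chans0 = chans[0]
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.ConvTranspose2d(c_in, c_out, kernel_size=7, stride=3,
                                          padding=1, output_padding=2),
                       nn.ReLU(inplace=True)]
        layers.append(nn.ConvTranspose2d(chans[-1], 3, kernel_size=7, stride=3,
                                         padding=1, output_padding=2))
        self.deconv = nn.Sequential(*layers)

    def forward(self, feat):                                   # feat: (B, 512)
        x = self.to_spatial(feat).view(-1, self.chans0, 2, 2)
        x = self.deconv(x)                                     # coarse reconstruction
        return F.interpolate(x, size=(224, 224), mode="bilinear",
                             align_corners=False)
```

In such a setup, x' = make_jigsaw(x) and x would both be passed through the frozen CLIP vision encoder f_v, and U would be applied to f_v(x') before evaluating the reconstruction loss L_recon and the self-supervised loss L_ssl defined below.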
To generate the visual attentive tokens, we first create an attention mask 𝐴(𝑓 𝑙 𝑣 (𝑥 ′ )) = sign(G V A T (𝑓 𝑙 𝑣 (𝑥 ′ ))) by passing the obtained style features through a two-layer bottleneck structure, Linear-ReLU-Linear-Sigmoid. We then multiply and add this attention mask with the style features 𝑓 𝑙 𝑣 (𝑥 ′ ) in a residual manner to obtain the M visual attentive tokens 𝑣 S M , defined as" }, { "figure_ref": [], "heading": "Visual Attentive", "publication_ref": [], "table_ref": [], "text": "v^{\mathcal{S}}_{\mathcal{M}}(x') = \big[ A(f^{l}_{v}(x')) \odot f^{l}_{v}(x') \big] + f^{l}_{v}(x') \qquad (1)

These visual attentive tokens 𝑣 S M (𝑥 ′ ) are combined with the textual tokens 𝑐 M , i.e., 𝑐 ∼ M (𝑥 ′ ) = 𝑐 M + 𝑣 S M (𝑥 ′ ). Finally, they pass through CLIP's pre-trained text encoder 𝑓 𝑡 to obtain the prompt token embeddings 𝑝𝑟𝑜𝑚𝑝𝑡 𝑒𝑚𝑏 (𝑥 ′ ), mathematically defined as

prompt_{emb}(x') = f_t(\tilde{c}_{\mathcal{M}}(x')) \in \mathbb{R}^{K \times 512} \qquad (2)

where 𝐾 denotes the number of classes. We follow the same process on the original input image 𝑥 to obtain 𝑝𝑟𝑜𝑚𝑝𝑡 𝑒𝑚𝑏 (𝑥). Finally, we compute the average of both prompt embeddings:

APE(x', x) = prompt^{avg}_{emb}(x', x) = \frac{prompt_{emb}(x') + prompt_{emb}(x)}{2} \qquad (3)

allowing us to perform the classification contrastively with the visual feature of 𝑥 ′ , i.e., 𝑓 𝑣 (𝑥 ′ )." }, { "figure_ref": [], "heading": "Losses and Network Optimization", "publication_ref": [ "b80" ], "table_ref": [], "text": "Cross-entropy Loss: We classify the jumbled versions (𝑥 ′ ) of the original images (𝑥) in a contrastive manner against the average text prompts generated from the respective conditioning images, i.e., 𝐴𝑃𝐸 (𝑥 ′ , 𝑥). The prediction probability for 𝑥 ′ to belong to the label 𝑦 is given by

p(y|x') = \frac{\exp(\langle f_v(x'), APE_y(x', x) \rangle / \tau)}{\sum_{k=1}^{|\mathcal{Y}|} \exp(\langle f_v(x'), APE_k(x', x) \rangle / \tau)} \qquad (4)

Here, ⟨•⟩ represents the cosine similarity, while 𝜏 stands for the temperature hyper-parameter. To compute the cross-entropy loss (L 𝑐𝑒 ), we compare the prediction probabilities of each input image with its corresponding class label as follows:

\mathcal{L}_{ce} = \arg\min_{\mathcal{G}_{VAT}} \; \mathbb{E}_{(x', x, y) \in P(\mathcal{D}_S)} \Big[ -\sum_{k=1}^{|\mathcal{Y}_S|} y_k \log(p(y_k|x')) \Big] \qquad (5)

Self-supervised Loss: We integrate Barlow Twins self-supervision [81] to align the features from 𝑓 𝑣 (𝑥 ′ ) and 𝑓 𝑣 (𝑥), the vision-encoder outputs for the jumbled and original images, and name this loss L 𝑠𝑠𝑙 . The objective minimizes the discrepancy between the cross-correlation matrix and the identity matrix, resulting in similar embeddings for the distorted samples while removing redundancy. This improves feature learning, leading to more coherent and efficient representations.

Reconstruction Loss: To achieve a better peak signal-to-noise ratio between the jigsaw-puzzle images (𝑥 ′ ) and the reconstructed images (x̂ = U (𝑓 𝑣 (𝑥 ′ ))), we incorporate the 𝑙 2 -norm between them, defined as

\mathcal{L}_{recon} = \arg\min_{\mathcal{U}} \; \mathbb{E}_{P(\mathcal{D}_S)} \, \lVert \hat{x} - x' \rVert_2 \qquad (6)

Diversity Maximization Loss: Additionally, we enforce a limitation on the similarity distribution between the visual features 𝑓 𝑣 (𝑥 ′ ) of the target samples and the prompt embeddings by minimizing the entropy of the prediction probabilities. It is defined as

\mathcal{L}_{dm} = \arg\min_{p(y|x')} \; \mathbb{E}_{P(\mathcal{D}_S)} \, \min\big( [\, p(y_1|x');\, \cdots;\, p(y_{|\mathcal{Y}|}|x') \,] \big) \qquad (7)

Total Loss: We optimize our proposed C-SAW with the total loss L 𝑡𝑜𝑡𝑎𝑙 , computed as:

\mathcal{L}_{total} = \arg\min_{p(y|x'),\, \mathcal{U},\, \mathcal{G}_{VAT}} \big[ \mathcal{L}_{ce} + \alpha\,(\mathcal{L}_{ssl} + \mathcal{L}_{recon}) + (1 - \alpha)\,\mathcal{L}_{dm} \big] \qquad (8)

where 𝛼 is a weight ratio factor that balances the self-supervised and diversity maximization losses."
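To illustrate how the objectives in Eqs. (4)-(8) fit together, the following is a minimal PyTorch sketch of the total loss computation. The tensor shapes, the simplified Barlow-Twins-style term standing in for L 𝑠𝑠𝑙 , and the entropy-style term standing in for L 𝑑𝑚 are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the Eq. (8)-style objective; shapes and the simplified
# SSL / diversity terms are illustrative assumptions, not the released code.
import torch
import torch.nn.functional as F


def csaw_total_loss(img_feat_j, img_feat_o, prompt_emb_j, prompt_emb_o,
                    recon, x_jumbled, labels, alpha=0.5, tau=0.07,
                    bt_lambda=5e-3):
    """img_feat_*: (B, D) visual features of jumbled / original images.
    prompt_emb_*: (K, D) class prompt embeddings conditioned on x' / x.
    recon, x_jumbled: (B, 3, 224, 224); labels: (B,) class indices."""
    # Eq. (3): average prompt embedding, then Eq. (4): cosine-similarity logits.
    ape = 0.5 * (prompt_emb_j + prompt_emb_o)
    logits = F.normalize(img_feat_j, dim=-1) @ F.normalize(ape, dim=-1).t() / tau
    l_ce = F.cross_entropy(logits, labels)                          # Eq. (5)

    # Simplified Barlow-Twins-style redundancy reduction standing in for L_ssl.
    z1 = (img_feat_j - img_feat_j.mean(0)) / (img_feat_j.std(0) + 1e-6)
    z2 = (img_feat_o - img_feat_o.mean(0)) / (img_feat_o.std(0) + 1e-6)
    c = z1.t() @ z2 / img_feat_j.shape[0]
    off_diag = c - torch.diag(torch.diagonal(c))
    l_ssl = ((torch.diagonal(c) - 1) ** 2).sum() + bt_lambda * (off_diag ** 2).sum()

    l_recon = F.mse_loss(recon, x_jumbled)                          # Eq. (6)

    # Entropy-style regulariser standing in for the diversity term of Eq. (7).
    probs = logits.softmax(dim=-1)
    l_dm = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()

    return l_ce + alpha * (l_ssl + l_recon) + (1.0 - alpha) * l_dm  # Eq. (8)
```

In this sketch only the weighted sum at the end mirrors Eq. (8) directly; the individual terms would be replaced by the paper's exact formulations in a faithful reimplementation.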
}, { "figure_ref": [ "fig_3" ], "heading": "EXPERIMENTAL RESULTS", "publication_ref": [ "b36", "b45", "b11", "b53", "b25", "b62", "b58", "b16", "b62", "b87", "b88" ], "table_ref": [], "text": "Datasets: For our experiments, we have used five remote sensing benchmark datasets i.e. PatternNet [37], RSICD [46], RESISC45 [12], MLRSNet [54] and EuroSat [26]. Additionally, we also work on generating learnable prompts within the single-source multi-target (SSMT) domain generalization setups, as mentioned in APPLeNet [63]. The mentioned curated dataset consists of 16 overlapping classes that are common across all four datasets and suits for domain generalization tasks.\nImplementation Details: We implemented our method in Py-Torch on a 12GB Nvidia RTX 3090-Ti GPU card. Our proposed C-SAW is trained for 50 epochs using stochastic gradient descent (SGD) optimizer [59]. The initial learning rate is set to 2𝑒 -4 with a warm-up fixed rate of 1𝑒 -7 to prevent explosive gradients. Input images are rescaled to (224 × 224) pixels and fed into CLIP's frozen encoder (ViT-B/16 [17]) for a latent dimension of R 512 . The model is trained with 16 samples per class and a batch size of 4. We experimentally choose the 𝛼 values to be in between [0.5, 0.7], shown in Figure 3.\nTo initialize text prompts, we use \"a photo of a [CLS]\" embeddings, following previous literature [63,88,89], resulting in a context length of four. Three different seeds are used for evaluation, and the average top-1 accuracy is reported. We experimentally select the parameters for optimizer, input preprocessing, and prompt initialization for optimal learning and convergence our proposed C-SAW." }, { "figure_ref": [], "heading": "Comparison", "publication_ref": [ "b56", "b72", "b21", "b88", "b87", "b22", "b91", "b30", "b62", "b4" ], "table_ref": [ "tab_2", "tab_2", "tab_3" ], "text": "In this section, we conduct a thorough comparison between our novel approach, C-SAW, and existing methods in the realm of deep learning. We assess their performance across three different domain generalization (DG) tasks: Base-to-Novel (B2N) Class Generalization: This task involves training and testing the model on separate sets of classes without any overlap between them. Cross-Dataset (CD) Generalization: Here, the model is trained on one dataset and then tested on new datasets that have variations in both their domains and labels. Single Source Multi-Target (SSMT) Domain Generalization: In this task, the model is trained on a specific source domain and then evaluated on multiple new domains, all within a closed-set scenario. To evaluate the effectiveness of our proposed C-SAW, we compare it against several established techniques, including zero-shot (ZS) CLIP [57], ERM [73], and DANN [22] for the SSMT task. Furthermore, we benchmark C-SAW against state-of-the-art (SOTA) prompt learning methods like CoOp [89], CoCoOp [88], CLIP-Adapter [23], ProGrad [92], MaPLe [31], AP-PLeNet [63] and StyLIP [5], which serve as baseline methods for all the generalization tasks under consideration.\nB2N class generalization: Table 1 presents the experimental results for Base-to-Novel (B2N) class generalization across five remote sensing (RS) datasets. The Table 1 includes the computation of the harmonic mean (HM), which represents the balance between the classification accuracies of the base and novel classes. For defining the source and target domains, we randomly and equally divide each dataset into two groups. 
We compare the performance of C-SAW with optimization-based methods that rely on learned context. Our proposed C-SAW method outperforms the SOTA methods on the PatternNet, RSICD, RESISC45, MLRSNet and EuroSat datasets by margins of at least 3.5%, 1.3%, 2.0%, 3.1%, and 1.3%, respectively, when considering the harmonic mean of the base and novel classes. Among the compared methods, MaPLe performs strongly relative to the non-RS methods and holds the third-best performance. APPLeNet achieves the second-best performance in generalizing to the unseen classes across all RS datasets. Compared to CLIP's zero-shot approach, C-SAW demonstrates superior generalization scores, with substantial margins of 37.33% for seen classes and 10.18% for unseen classes, averaged across all the remote sensing datasets.

The impressive performance gains of C-SAW in the B2N class generalization tasks highlight its effectiveness in adapting to novel classes in remote sensing datasets. The results reinforce C-SAW's capability to achieve robust domain generalization across diverse RS datasets and to outperform existing optimization-based and zero-shot approaches. These findings position C-SAW as a competitive and reliable solution for class generalization in remote sensing domain adaptation tasks.

CD generalization: In the cross-dataset setup, we evaluate C-SAW on the PatternNet dataset (source domain) and report zero-shot inference results for the remaining remote sensing (RS) datasets (target domains), as shown in Table 2. Our method demonstrates remarkable improvements in both source and target classification performance. It surpasses zero-shot CLIP by substantial margins of 27.2% in the source domain and 4.46% on average across the target domains. Moreover, C-SAW outperforms the SOTA methods by at least 2.55% on average across the unseen target datasets. These results highlight the effectiveness of C-SAW in reducing the generalization gap between a single source and multiple target domains, even in the presence of domain and label shifts, within the CD generalization setting. The significant performance gains achieved by C-SAW in the CD setup underscore its ability to generalize effectively across diverse and unseen RS datasets. This demonstrates its potential for practical applications in remote sensing domain generalization tasks, where models must perform well on previously unseen target domains. The superior performance of C-SAW compared to zero-shot CLIP and other SOTA methods further solidifies its position as a promising approach for domain generalization and transfer learning in the remote sensing domain." }, { "figure_ref": [], "heading": "SSMT domain generalization:", "publication_ref": [ "b62" ], "table_ref": [ "tab_4" ], "text": "In the Single-Source Multi-Target (SSMT) domain generalization setup, which differs from the previously discussed cross-dataset transfer (CDT) setting with shared classes across domains, we select the PatternNetv2 dataset as the source domain, following the approach of APPLeNet [63]. The results presented in Table 3 demonstrate the remarkable performance of C-SAW. Our proposed method outperforms all the SOTA methods, showing a substantial lead of at least 4.67% on RSICDv2, 5.35% on RESISC45v2, and 4.23% on the MLRSNetv2 datasets. Overall, C-SAW exhibits notably superior performance, achieving a 4.94% improvement over the average performance across the target domains.
The performance superiority of C-SAW in the SSMT setup further validates its effectiveness in addressing the domain generalization challenges in remote sensing images. These results underscore the potential of our approach in real-world applications, where adapting to diverse and unseen target domains is crucial. The robust performance gains achieved by C-SAW across multiple target domains highlight its promise as a practical and effective solution for remote sensing domain generalization tasks." }, { "figure_ref": [ "fig_3" ], "heading": "Ablation Studies", "publication_ref": [ "b16", "b30", "b87", "b91", "b24", "b6" ], "table_ref": [ "tab_5", "tab_6", "tab_7", "tab_8" ], "text": "Ablation study on weight ratio factor (𝛼): In this section, we investigate the impact of the weight ratio factor (𝛼) on the performance of our model, C-SAW. 𝛼 determines the balance between the self-supervised and supervised objectives in Eq. 8, where 𝛼 and (1 -𝛼) weight the two groups of losses, respectively. Figure 3 illustrates the results of varying the weight ratio factor 𝛼. We observe that the best performance is achieved when 𝛼 = 0.5, particularly in the B2N and SSMT setups. For the CD setup, the optimal weight ratio factor is found to be 𝛼 = 0.7. These findings indicate that striking an appropriate balance between the self-supervised and supervised tasks is critical to maximizing the performance of C-SAW across different experimental setups. By tuning the weight ratio factor, we ensure that C-SAW effectively leverages both forms of training signal, leading to superior performance in domain generalization tasks.

Evaluation of C-SAW's sensitivity to the number of shots: The performance of our proposed C-SAW is assessed by varying the number of shots from 1 to all for the B2N class generalization task. A comparison is made against SOTA prompting techniques, as presented in Table 4. For this evaluation, we utilize a context length of 4, position the class token at the end, employ ViT-B/16 [17] as the visual feature backbone, and utilize a unified context vector. Since CLIP operates in a zero-shot manner, it is excluded from this comparison, and we focus solely on few-shot-based prompting methods while showcasing results averaged over all RS datasets. Our results demonstrate that C-SAW surpasses the performance of the benchmark prompt-learning-based methods by at least 1.9%, 2.2%, 2.3%, 2.4% and 2.1% for 1, 4, 8, 16 shots and all images, respectively.

Ablation study on loss terms: We perform several experiments with our proposed model, C-SAW, employing different combinations of loss terms, as outlined in Table 5. The 𝑆𝑆𝐿 term captures the self-supervised component of the model and combines the self-supervised loss (L 𝑠𝑠𝑙 ) and the reconstruction loss (L 𝑟𝑒𝑐𝑜𝑛 ), which together minimize the gap between the self-supervised views and the original images. Using only these losses together with L 𝑐𝑒 decreases the model performance by 2.73%, 0.56% and 5.18% in the B2N, CD and SSMT setups, respectively. For this reason, we also consider the non-𝑆𝑆𝐿 loss L 𝑑𝑚 . However, using L 𝑑𝑚 only and removing the 𝑆𝑆𝐿 losses leads to a performance decrease of 3.69%, 3.33% and 5.60% in the B2N, CD and SSMT setups, respectively. From Eq.
8: when 𝛼 = 1.0, only the 𝑆𝑆𝐿 loss is active; when 𝛼 = 0, only L 𝑑𝑚 contributes; and when 𝛼 = 0.5, the best results are obtained for the B2N and SSMT DG tasks, as discussed above in the analysis of the 𝛼 factor. Ablation study on context lengths: A context length of four, corresponding to the prompt \"a photo of a\", is experimentally found to be optimal in SOTA prompt-learning methods [31,88,92]. As shown in Table 6, we find that a context length of 𝑀 = 4 provides the best performance in comparison with 𝑀 = 8, 12 and 16. Ablation study on feature extractors: Since CLIP is the most popular SOTA model for few-shot and zero-shot tasks in computer vision, we use CLIP as the frozen feature extractor and perform prompt tuning for various downstream tasks. We also compare our proposed C-SAW with simple vision extractors such as ResNet-50 (RN50) [25] and DINO [7] on the SSMT DG task, and we observe that prompt learning conditioned on image features outperforms these pre-trained feature extractors by at least 11%, as shown in Table 7." }, { "figure_ref": [], "heading": "CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "We present C-SAW, a framework designed to enhance the multi-domain generalization capability of CLIP-derived features by introducing two key improvements on top of the frozen CLIP model. Firstly, we propose a jigsaw-based self-supervised objective to supplement CLIP's vision encoder. This addition injects part-aware visual feature learning, effectively addressing a limitation present in the baseline CLIP model. Secondly, we introduce a novel prompt learning approach within C-SAW, strategically integrating visual content and style primitives into the prompts. This integration enables the model to achieve better generalization to previously unseen domains and classes. Through these modifications, C-SAW demonstrates impressive performance on challenging optical remote sensing images, achieving better generalization across diverse domains and classes. In the future, we aim to further enhance the model's capabilities by incorporating outlier identification, unlocking even more potential for anomaly detection and handling challenging scenarios." } ]
We focus on domain and class generalization problems in analyzing optical remote sensing images, using the large-scale pre-trained vision-language model (VLM), CLIP. While contrastively trained VLMs show impressive zero-shot generalization performance, their effectiveness is limited when dealing with diverse domains during training and testing. Existing prompt learning techniques overlook the importance of incorporating domain and content information into the prompts, which results in a drop in performance when dealing with such multi-domain data. To address these challenges, we propose a solution that ensures domain-invariant prompt learning while enhancing the expressiveness of visual features. We observe that CLIP's vision encoder struggles to identify contextual image information, particularly when image patches are jumbled up. This issue is especially severe in optical remote sensing images, where land-cover classes exhibit well-defined contextual appearances. To this end, we introduce C-SAW, a method that complements CLIP with a self-supervised loss in the visual space and a novel prompt learning technique that emphasizes both visual domain and content-specific features. We keep the CLIP backbone frozen and introduce a small set of projectors for both CLIP encoders to train C-SAW contrastively. Experimental results demonstrate the superiority of C-SAW across multiple remote sensing benchmarks and different generalization tasks.
C-SAW: Self-Supervised Prompt Learning for Image Generalization in Remote Sensing
[ { "figure_caption": "Figure 1 :1Figure 1: C-SAW shows better performance scores in comparison to APPLeNet [63] for jumbled RS images. Here, B2N, CD and SSMT represent base-to-new class generalization, cross-dataset generalization and single source multi target generalization, respectively.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: Our proposed C-SAW architecture utilizes 𝑓 𝑣 and 𝑓 𝑡 as the visual and text encoders, respectively, from CLIP's frozen backbone. The visual attentive token generator G 𝑉 𝐴𝑇 generates M visual attentive tokens 𝑣S using intermediate layers IL of the source domains S. These visual attentive tokens, along with context and class tokens, create text embeddings using 𝑓 𝑡 , forming the visual attentive text prompting (VATP) approach. We utilize these final image-conditioned prompt embeddings for the classification task under the supervision of the cross-entropy loss (L 𝑐𝑒 ). Additionally, the upsampling block U, represented in light pink, is associated with the reconstruction loss L 𝑟𝑒𝑐𝑜𝑛 , self-supervised loss (L 𝑠𝑠𝑙 ), and diversity maxmization loss (L 𝑑𝑚 ). Best viewed in colour.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Text Prompting (VATP): We derive the style representation for each domain by averaging the feature statistics 𝜇 of each batch 𝐵 from the respective domain. These statistics are obtained from the intermediate layers IL of the visual encoder of pre-trained CLIP. The style representation for the 𝑖 𝑡ℎ source domain is denoted as 𝜇 𝑠 𝑖 . The generative visual attentive token G V A T block takes the style information 𝜇 𝑠 to generate M attentive tokens 𝑣 S M conditioned on the input image 𝑥. Mathematically, this is denoted as 𝑣 S M = G V A T (𝜇 (𝑥 ′ )). The G V A T block consists of 𝑛 linear layers, each for extracting style features from 𝑛 intermediate layers 𝑓 𝑙 𝑣 , where 𝑙 ∈ {1, • • • , 𝑛} and 𝑓 𝑙 𝑣 ∈ R 𝑊 ×𝐻 ×𝐶 with 𝑊 , 𝐻 and 𝐶 denote height, width and channel, respectively.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Ablation of weight ratio factor (𝛼) in all generalization setup.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Comparing C-SAW with SOTA methods on base-to-new class generalization over existing methods on 5 different remote sensing datasets on 16-shots with context length, M=4. 
HM represents the harmonic mean.", "figure_data": "(a) Average over 5 datasets(b) PatternNet(c) RSICDMethodBaseNewHMMethodBaseNewHMMethodBaseNewHMCLIP [57]56.5060.0458.22CLIP [57]63.6764.3764.02CLIP [57]54.6155.3354.97CoOp [89]89.6557.3469.94CoOp [89]91.6262.2374.12CoOp [89]92.5256.0869.83CLIP-Adt [23]81.9459.0568.64CLIP-Adt [23]82.1563.2671.48CLIP-Adt [23]78.9355.4465.13CoCoOp [88]87.7159.0270.56CoCoOp [88]92.3963.3475.16CoCoOp [88]93.1858.6772.00ProGrad [92]87.5551.1864.60ProGrad [92]92.6562.4874.63ProGrad [92]93.4458.1571.69MaPLe [31]90.8563.1174.48MaPLe [31]94.7466.1277.88MaPLe [31]94.9160.5273.91APPLeNet [63]90.9563.7274.94APPLeNet [63]94.8965.5777.55APPLeNet [63]95.2660.7174.16StyLIP [5]91.2563.5174.90StyLIP [5]95.1366.7878.47StyLIP [5]94.9860.9274.23C-SAW92.9066.0377.20C-SAW96.0370.1881.09C-SAW95.1362.5675.48(d) RESISC45(e) MLRSNet(f) EuroSATMethodBaseNewHMMethodBaseNewHMMethodBaseNewHMCLIP [57]56.3255.3855.85CLIP [57]51.4351.9251.67CLIP [57]56.4864.0560.03CoOp [89]89.0455.7568.57CoOp [89]75.2153.6462.62CoOp [89]92.1957.7468.69CLIP-Adt [23]81.6756.2366.60CLIP-Adt [23]71.6453.1961.05CLIP-Adt [23]85.2861.0771.17CoCoOp [88]89.7857.1869.86CoCoOp [88]76.3252.7562.38CoCoOp [88]87.4960.0471.21ProGrad [92]90.1357.8970.50ProGrad [92]75.9652.2361.90ProGrad [92]87.0444.6759.04MaPLe [31]91.4560.8273.05MaPLe [31]79.0654.8564.59MaPLe [31]94.0773.2382.35APPLeNet [63]91.2460.4672.73APPLeNet [63]78.5356.4165.66APPLeNet [63]94.8175.4684.04StyLIP [5]90.8760.3472.52StyLIP [5]80.6555.4765.73StyLIP [5]94.6174.0683.08C-SAW92.5662.7074.76C-SAW85.4157.5268.74C-SAW95.3977.2085.33", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparing C-SAW with SOTA methods for crossdataset generalization with PatternNet dataset as the source domain and remaining RS datasets as the target domains.", "figure_data": "SourceTargetMethodPatternNetRSICDRESISC45MLRSNetEuroSATAverageCLIP [57]61.7243.2548.5645.1343.2445.65CoOp [89]85.2342.5349.3444.5046.5145.46CLIP-Adapter [23]74.2742.5749.0744.1744.7545.27CoCoOp [88]85.9543.6149.5344.7246.8245.95ProGrad [92]86.1441.2548.2644.1245.9744.54MaPLe [31]87.9245.2349.5646.3747.6347.05APPLeNet [63]88.1744.8750.9746.8349.5247.56StyLIP [5]88.0146.1249.8946.9449.7948.19C-SAW88.9250.4750.6049.2650.5450.11", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparing C-SAW with SOTA methods for singlesource multi-target domain generalization on the benchmark RS datasets.", "figure_data": "SourceTargetMethodPatternNetv2RSICDv2RESISC45v2MLRSNetv2AverageERM [73]73.6961.4061.5961.1361.37CLIP [57]78.0472.1575.4267.7871.78DANN [22]93.5675.4976.1870.5374.07CoOp [89]94.2576.5077.8770.9775.11CLIP-Adapter [23]92.3679.1779.7671.0476.66CoCoOp [88]94.4179.3380.4371.6777.14ProGrad [92]95.1877.4680.6572.2976.80MaPLe [31]96.5280.4583.3776.1579.99APPLeNet [63]96.6381.0382.2374.0379.10StyLIP [5]96.8580.6784.5675.6680.30C-SAW97.9185.7088.7280.3884.93", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparing C-SAW with the SOTA methods on varying the number of shots for the B2N class generalization (considered harmonic mean HM) task with average of all RS datasets.", "figure_data": "Method1-shot4-shot8-shot16-shotAllCoOp [89]65.3166.8567.5269.9467.68CLIP-Adapter [23]63.4064.5867.1168.6466.57CoCoOp [88]66.8268.3169.0370.5668.85ProGrad [92]59.3361.1663.4464.6060.86MaPLe[31]73.4274.2876.6377.4976.72APPLeNet[63]75.8276.2377.6978.4378.51C-SAW77.1477.8379.4280.3279.95", "figure_id": "tab_5", 
"figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study of C-SAW with different settings of losses in all generalization setup over average of all RS datasets. Here, we define 𝑆𝑆𝐿 (L 𝑠𝑠𝑙 + L 𝑟𝑒𝑐𝑜𝑛 ).", "figure_data": "LossB2NCDSSMTL 𝑐𝑒72.9345.8276.39L 𝑐𝑒 + 𝑆𝑆𝐿77.5949.5579.46L 𝑐𝑒 + L 𝑑𝑚76.6346.7879.04L 𝑐𝑒 + 𝑆𝑆𝐿 + L 𝑑𝑚80.3250.1184.64", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparing C-SAW with the SOTA methods on varying context length for the B2N class generalization.", "figure_data": "Context Length (𝑀)481216C-SAW80.3279.5579.0277.39", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Comparing C-SAW with different feature extractors for single-source multi-target domain generalization on the benchmark RS datasets.", "figure_data": "SourceTargetMethodPatternNetv2RSICDv2RESISC45v2MLRSNetv2AverageRN50[25]65.1254.3052.7753.4553.51DINO[7]80.5574.1071.9275.3973.80C-SAW97.9185.7088.7280.3884.93", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" } ]
Avigyan Bhattacharya; Mainak Singha; Ankit Jha; Biplab Banerjee
[ { "authors": "Aishwarya Agrawal; Jiasen Lu; Stanislaw Antol; Margaret Mitchell; C Lawrence Zitnick; Dhruv Batra; Devi Parikh", "journal": "", "ref_id": "b0", "title": "VQA: Visual Question Answering", "year": "2016" }, { "authors": "Burak Kumar Ayush; Chenlin Uzkent; Kumar Meng; Marshall Tanmay; David Burke; Stefano Lobell; Ermon", "journal": "", "ref_id": "b1", "title": "Geography-aware self-supervised learning", "year": "2021" }, { "authors": "Artem Babenko; Anton Slesarev; Alexandr Chigorin; Victor Lempitsky", "journal": "Springer", "ref_id": "b2", "title": "Neural codes for image retrieval", "year": "2014" }, { "authors": "Yogesh Balaji; Swami Sankaranarayanan; Rama Chellappa", "journal": "Advances in neural information processing systems", "ref_id": "b3", "title": "Metareg: Towards domain generalization using meta-regularization", "year": "2018" }, { "authors": "Shirsha Bose; Enrico Fini; Ankit Jha; Mainak Singha; Biplab Banerjee; Elisa Ricci", "journal": "", "ref_id": "b4", "title": "StyLIP: Multi-Scale Style-Conditioned Prompt Learning for CLIPbased Domain Generalization", "year": "2023" }, { "authors": "Fabio Maria Carlucci; Paolo Russo; Tatiana Tommasi; Barbara Caputo", "journal": "IEEE", "ref_id": "b5", "title": "Hallucinating agnostic images to generalize across domains", "year": "2019" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b6", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Prithvijit Chattopadhyay; Yogesh Balaji; Judy Hoffman", "journal": "Springer", "ref_id": "b7", "title": "Learning to balance specificity and invariance for in and out of domain generalization", "year": "2020-08-23" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "PMLR", "ref_id": "b8", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Xinlei Chen; Haoqi Fan; Ross Girshick; Kaiming He", "journal": "", "ref_id": "b9", "title": "Improved baselines with momentum contrastive learning", "year": "2020" }, { "authors": "Yuxing Chen; Lorenzo Bruzzone", "journal": "", "ref_id": "b10", "title": "Self-supervised change detection by fusing SAR and optical multi-temporal images", "year": "2021" }, { "authors": "Gong Cheng; Junwei Han; Xiaoqiang Lu", "journal": "Proc. 
IEEE", "ref_id": "b11", "title": "Remote sensing image scene classification: Benchmark and state of the art", "year": "2017" }, { "authors": "Yezhen Cong; Samar Khanna; Chenlin Meng; Patrick Liu; Erik Rozi; Yutong He; Marshall Burke; David Lobell; Stefano Ermon", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Satmae: Pre-training transformers for temporal and multi-spectral satellite imagery", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b13", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Jian Ding; Nan Xue; Gui-Song Xia; Dengxin Dai", "journal": "", "ref_id": "b14", "title": "Decoupling zero-shot semantic segmentation", "year": "2022" }, { "authors": "Huihui Dong; Wenping Ma; Yue Wu; Jun Zhang; Licheng Jiao", "journal": "Remote Sensing", "ref_id": "b15", "title": "Selfsupervised representation learning for remote sensing image change detection based on temporal prediction", "year": "2020" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b16", "title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", "year": "2020" }, { "authors": "Qi Dou; Daniel Coelho De Castro; Konstantinos Kamnitsas; Ben Glocker", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b17", "title": "Domain generalization via model-agnostic learning of semantic features", "year": "2019" }, { "authors": "Hongchao Fang; Sicheng Wang; Meng Zhou; Jiayuan Ding; Pengtao Xie", "journal": "", "ref_id": "b18", "title": "Cert: Contrastive self-supervised learning for language understanding", "year": "2020" }, { "authors": "Chelsea Finn; Pieter Abbeel; Sergey Levine", "journal": "PMLR", "ref_id": "b19", "title": "Model-agnostic metalearning for fast adaptation of deep networks", "year": "2017" }, { "authors": "Yaroslav Ganin; Victor Lempitsky", "journal": "PMLR", "ref_id": "b20", "title": "Unsupervised domain adaptation by backpropagation", "year": "2015" }, { "authors": "Yaroslav Ganin; Evgeniya Ustinova; Hana Ajakan; Pascal Germain; Hugo Larochelle; François Laviolette; Mario Marchand; Victor Lempitsky", "journal": "The journal of machine learning research", "ref_id": "b21", "title": "Domain-adversarial training of neural networks", "year": "2016" }, { "authors": "Peng Gao; Shijie Geng; Renrui Zhang; Teli Ma; Rongyao Fang; Yongfeng Zhang; Hongsheng Li; Yu Qiao", "journal": "", "ref_id": "b22", "title": "Clip-adapter: Better vision-language models with feature adapters", "year": "2021" }, { "authors": "Xiuye Gu; Tsung-Yi Lin; Weicheng Kuo; Yin Cui", "journal": "", "ref_id": "b23", "title": "Open-vocabulary object detection via vision and language knowledge distillation", "year": "2021" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b24", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Patrick Helber; Benjamin Bischke; Andreas Dengel; Damian Borth", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b25", "title": "Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification", "year": "2019" }, { "authors": "Miyuki 
Hino; Elinor Benami; Nina Brooks", "journal": "Nature Sustainability", "ref_id": "b26", "title": "Machine learning for environmental monitoring", "year": "2018" }, { "authors": "Zeyi Huang; Haohan Wang; Eric P Xing; Dong Huang", "journal": "Springer", "ref_id": "b27", "title": "Self-challenging improves cross-domain generalization", "year": "2020-08-23" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "PMLR", "ref_id": "b28", "title": "Scaling up visual and visionlanguage representation learning with noisy text supervision", "year": "2021" }, { "authors": "Jian Kang; Ruben Fernandez-Beltran; Puhong Duan; Sicong Liu; Antonio J Plaza", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b29", "title": "Deep unsupervised embedding for remotely sensed images based on spatially augmented momentum contrast", "year": "2020" }, { "authors": "Muhammad Uzair Khattak; Hanoona Rasheed; Muhammad Maaz; Salman Khan; Fahad Shahbaz Khan", "journal": "", "ref_id": "b30", "title": "MaPLe: Multi-Modal Prompt Learning", "year": "2023" }, { "authors": "Alexander Kolesnikov; Xiaohua Zhai; Lucas Beyer", "journal": "", "ref_id": "b31", "title": "Revisiting selfsupervised visual representation learning", "year": "1920" }, { "authors": "Alex Krizhevsky; Ilya Sutskever; Geoffrey E Hinton", "journal": "Advances in neural information processing systems", "ref_id": "b32", "title": "Imagenet classification with deep convolutional neural networks", "year": "2012" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b33", "title": "Albert: A lite bert for self-supervised learning of language representations", "year": "2019" }, { "authors": "Da Li; Yongxin Yang; Yi-Zhe Song; Timothy Hospedales", "journal": "", "ref_id": "b34", "title": "Learning to generalize: Meta-learning for domain generalization", "year": "2018" }, { "authors": "Da Li; Jianshu Zhang; Yongxin Yang; Cong Liu; Yi-Zhe Song; Timothy M Hospedales", "journal": "", "ref_id": "b35", "title": "Episodic training for domain generalization", "year": "2019" }, { "authors": "Hongzhi Li; Joseph G Ellis; Lei Zhang; Shih-Fu Chang", "journal": "", "ref_id": "b36", "title": "Patternnet: Visual pattern mining with deep neural network", "year": "2018" }, { "authors": "Haifeng Li; Yi Li; Guo Zhang; Ruoyun Liu; Haozhe Huang; Qing Zhu; Chao Tao", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b37", "title": "Global and local contrastive self-supervised learning for semantic segmentation of HR remote sensing images", "year": "2022" }, { "authors": "Haoliang Li; Sinno Jialin Pan; Shiqi Wang; Alex C Kot", "journal": "", "ref_id": "b38", "title": "Domain generalization with adversarial feature learning", "year": "2018" }, { "authors": "Lei Li; Ke Gao; Juan Cao; Ziyao Huang; Yepeng Weng; Xiaoyue Mi; Zhengze Yu; Xiaoya Li; Boyang Xia", "journal": "", "ref_id": "b39", "title": "Progressive domain expansion network for single domain generalization", "year": "2021" }, { "authors": "Liunian Harold; Li ; Mark Yatskar; Cho-Jui Da Yin; Kai-Wei Hsieh; Chang", "journal": "", "ref_id": "b40", "title": "Visualbert: A simple and performant baseline for vision and language", "year": "2019" }, { "authors": "Wenyuan Li; Keyan Chen; Hao Chen; Zhenwei Shi", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b41", "title": "Geographical 
knowledge-driven representation learning for remote sensing images", "year": "2021" }, { "authors": "Ya Li; Xinmei Tian; Mingming Gong; Yajing Liu; Tongliang Liu; Kun Zhang; Dacheng Tao", "journal": "", "ref_id": "b42", "title": "Deep domain generalization via conditional invariant adversarial networks", "year": "2018" }, { "authors": "Yiying Li; Yongxin Yang; Wei Zhou; Timothy Hospedales", "journal": "PMLR", "ref_id": "b43", "title": "Feature-critic networks for heterogeneous domain generalization", "year": "2019" }, { "authors": "Jiasen Lu; Dhruv Batra; Devi Parikh; Stefan Lee", "journal": "Advances in neural information processing systems", "ref_id": "b44", "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "year": "2019" }, { "authors": "Xiaoqiang Lu; Binqiang Wang; Xiangtao Zheng; Xuelong Li", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b45", "title": "Exploring models and data for remote sensing image caption generation", "year": "2017" }, { "authors": "Yuning Lu; Jianzhuang Liu; Yonggang Zhang; Yajing Liu; Xinmei Tian", "journal": "", "ref_id": "b46", "title": "Prompt distribution learning", "year": "2022" }, { "authors": "Simone Valerio Marsocci; Nikos Scardapane; Komodakis", "journal": "Remote Sensing", "ref_id": "b47", "title": "MARE: Self-supervised multi-attention REsu-Net for semantic segmentation in remote sensing", "year": "2021" }, { "authors": "Toshihiko Matsuura; Tatsuya Harada", "journal": "", "ref_id": "b48", "title": "Domain generalization using a mixture of multiple latent domains", "year": "2020" }, { "authors": "Norman Mu; Alexander Kirillov; David Wagner; Saining Xie", "journal": "Springer", "ref_id": "b49", "title": "Slip: Self-supervision meets language-image pre-training", "year": "2022-10-23" }, { "authors": "Hyeonseob Nam; Hyunjae Lee; Jongchan Park; Wonjun Yoon; Donggeun Yoo", "journal": "", "ref_id": "b50", "title": "Reducing domain gap by reducing style bias", "year": "2021" }, { "authors": "Claudio Persello; Lorenzo Bruzzone", "journal": "IEEE", "ref_id": "b51", "title": "Relevant and invariant feature selection of hyperspectral images for domain generalization", "year": "2014" }, { "authors": "Fabio Petroni; Tim Rocktäschel; Patrick Lewis; Anton Bakhtin; Yuxiang Wu; Alexander H Miller; Sebastian Riedel", "journal": "", "ref_id": "b52", "title": "Language models as knowledge bases?", "year": "2019" }, { "authors": "Xiaoman Qi; Panpan Zhu; Yuebin Wang; Liqiang Zhang; Junhuan Peng; Mengfan Wu; Jialong Chen; Xudong Zhao; Ning Zang; P Takis; Mathiopoulos ", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b53", "title": "MLRSNet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding", "year": "2020" }, { "authors": "Fengchun Qiao; Xi Peng", "journal": "", "ref_id": "b54", "title": "Uncertainty-guided model generalization to unseen domains", "year": "2021" }, { "authors": "Fengchun Qiao; Long Zhao; Xi Peng", "journal": "", "ref_id": "b55", "title": "Learning to learn single domain generalization", "year": "2020" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b56", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Alec Radford; Karthik Narasimhan; Tim Salimans; Ilya Sutskever", "journal": "", 
"ref_id": "b57", "title": "Improving language understanding by generative pre-training", "year": "2018" }, { "authors": "Herbert Robbins; Sutton Monro", "journal": "The annals of mathematical statistics", "ref_id": "b58", "title": "A stochastic approximation method", "year": "1951" }, { "authors": "F Floyd; Sabins", "journal": "Ore geology reviews", "ref_id": "b59", "title": "Remote sensing for mineral exploration", "year": "1999" }, { "authors": "Shiv Shankar; Vihari Piratla; Soumen Chakrabarti; Siddhartha Chaudhuri; Preethi Jyothi; Sunita Sarawagi", "journal": "", "ref_id": "b60", "title": "Generalizing across domains via cross-gradient training", "year": "2018" }, { "authors": "Taylor Shin; Yasaman Razeghi; Robert L Logan; I V ; Eric Wallace; Sameer Singh", "journal": "", "ref_id": "b61", "title": "Autoprompt: Eliciting knowledge from language models with automatically generated prompts", "year": "2020" }, { "authors": "Mainak Singha; Ankit Jha; Bhupendra Solanki; Shirsha Bose; Biplab Banerjee", "journal": "", "ref_id": "b62", "title": "APPLeNet: Visual Attention Parameterized Prompt Learning for Few-Shot Remote Sensing Image Generalization Using CLIP", "year": "2023" }, { "authors": "Vladan Stojnić; Vladimir Risojević", "journal": "", "ref_id": "b63", "title": "Evaluation of split-brain autoencoders for high-resolution remote sensing scene classification", "year": "2018" }, { "authors": "Vladan Stojnic; Vladimir Risojevic", "journal": "", "ref_id": "b64", "title": "Self-supervised learning of remote sensing scene representations using contrastive multiview coding", "year": "2021" }, { "authors": "Christian Szegedy; Wojciech Zaremba; Ilya Sutskever; Joan Bruna; Dumitru Erhan; Ian Goodfellow; Rob Fergus", "journal": "", "ref_id": "b65", "title": "Intriguing properties of neural networks", "year": "2013" }, { "authors": "Chao Tao; Ji Qi; Mingning Guo; Qing Zhu; Haifeng Li", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b66", "title": "Self-supervised remote sensing feature learning: Learning paradigms, challenges, and future works", "year": "2023" }, { "authors": "Chao Tao; Ji Qi; Weipeng Lu; Hao Wang; Haifeng Li", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b67", "title": "Remote sensing image scene classification with self-supervised paradigm under limited labeled samples", "year": "2020" }, { "authors": "Yonglong Tian; Dilip Krishnan; Phillip Isola", "journal": "Springer", "ref_id": "b68", "title": "Contrastive multiview coding", "year": "2020-08-23" }, { "authors": "Naftali Tishby; Fernando C Pereira; William Bialek", "journal": "", "ref_id": "b69", "title": "The information bottleneck method", "year": "2000" }, { "authors": "Ilya Tolstikhin; Olivier Bousquet; Sylvain Gelly; Bernhard Schoelkopf", "journal": "", "ref_id": "b70", "title": "Wasserstein auto-encoders", "year": "2017" }, { "authors": "Devis Tuia; Claudio Persello; Lorenzo Bruzzone", "journal": "IEEE geoscience and remote sensing magazine", "ref_id": "b71", "title": "Domain adaptation for the classification of remote sensing data: An overview of recent advances", "year": "2016" }, { "authors": "Vladimir N Vapnik", "journal": "Wiley-Interscience", "ref_id": "b72", "title": "Statistical Learning Theory", "year": "1998" }, { "authors": "Stefano Vincenzi; Angelo Porrello; Pietro Buzzega; Marco Cipriano; Pietro Fronte; Roberto Cuccu; Carla Ippoliti; Annamaria Conte; Simone Calderara", "journal": "IEEE", "ref_id": "b73", "title": "The color out of space: learning 
self-supervised representations for earth observation imagery", "year": "2021" }, { "authors": "Oriol Vinyals; Alexander Toshev; Samy Bengio; Dumitru Erhan", "journal": "", "ref_id": "b74", "title": "Show and tell: A neural image caption generator", "year": "2015" }, { "authors": "Riccardo Volpi; Hongseok Namkoong; Ozan Sener; John C Duchi; Vittorio Murino; Silvio Savarese", "journal": "Advances in neural information processing systems", "ref_id": "b75", "title": "Generalizing to unseen domains via adversarial data augmentation", "year": "2018" }, { "authors": "Yi Wang; Conrad M Albrecht; Nassim Ait; Ali Braham; Lichao Mou; Xiao Xiang Zhu", "journal": "", "ref_id": "b76", "title": "Self-supervised learning in remote sensing: A review", "year": "2022" }, { "authors": "Zijian Wang; Yadan Luo; Ruihong Qiu; Zi Huang; Mahsa Baktashmotlagh", "journal": "", "ref_id": "b77", "title": "Learning to diversify for single domain generalization", "year": "2021" }, { "authors": "Qinwei Xu; Ruipeng Zhang; Yi-Yan Wu; Ya Zhang; Ning Liu; Yanfeng Wang", "journal": "", "ref_id": "b78", "title": "SimDE: A Simple Domain Expansion Approach for Single-Source Domain Generalization", "year": "2023" }, { "authors": "Qinwei Xu; Ruipeng Zhang; Ya Zhang; Yanfeng Wang; Qi Tian", "journal": "", "ref_id": "b79", "title": "A fourier-based framework for domain generalization", "year": "2021" }, { "authors": "Jure Zbontar; Li Jing; Ishan Misra; Yann Lecun; Stéphane Deny", "journal": "PMLR", "ref_id": "b80", "title": "Barlow twins: Self-supervised learning via redundancy reduction", "year": "2021" }, { "authors": "Xiaohua Zhai; Avital Oliver; Alexander Kolesnikov; Lucas Beyer", "journal": "", "ref_id": "b81", "title": "S4l: Self-supervised semi-supervised learning", "year": "2019" }, { "authors": "Renrui Zhang; Rongyao Fang; Wei Zhang; Peng Gao; Kunchang Li; Jifeng Dai; Yu Qiao; Hongsheng Li", "journal": "", "ref_id": "b82", "title": "Tip-adapter: Training-free clip-adapter for better vision-language modeling", "year": "2021" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros", "journal": "", "ref_id": "b83", "title": "Split-brain autoencoders: Unsupervised learning by cross-channel prediction", "year": "2017" }, { "authors": "Zhaoyang Zhang; Xuying Wang; Xiaoming Mei; Chao Tao; Haifeng Li", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b84", "title": "FALSE: False Negative Samples Aware Contrastive Learning for Semantic Segmentation of High-Resolution Remote Sensing Image", "year": "2022" }, { "authors": "Long Zhao; Ting Liu; Xi Peng; Dimitris Metaxas", "journal": "Curran Associates, Inc", "ref_id": "b85", "title": "Maximum-Entropy Adversarial Data Augmentation for Improved Generalization and Robustness", "year": "2020" }, { "authors": "Juepeng Zheng; Wenzhao Wu; Shuai Yuan; Haohuan Fu; Weijia Li; Le Yu", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b86", "title": "Multisource-domain generalization-based oil palm tree detection using very-high-resolution (vhr) satellite images", "year": "2021" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", "journal": "", "ref_id": "b87", "title": "Conditional prompt learning for vision-language models", "year": "2022" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", "journal": "International Journal of Computer Vision", "ref_id": "b88", "title": "Learning to prompt for vision-language models", "year": "2022" }, { "authors": "Kaiyang Zhou; Yongxin Yang; Timothy Hospedales; Tao Xiang", 
"journal": "", "ref_id": "b89", "title": "Deep domain-adversarial image generation for domain generalisation", "year": "2020" }, { "authors": "Kaiyang Zhou; Yongxin Yang; Yu Qiao; Tao Xiang", "journal": "", "ref_id": "b90", "title": "Domain generalization with mixstyle", "year": "2021" }, { "authors": "Beier Zhu; Yulei Niu; Yucheng Han; Yue Wu; Hanwang Zhang", "journal": "", "ref_id": "b91", "title": "Promptaligned Gradient for Prompt Tuning", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 144.58, 545.89, 90.32, 10.64 ], "formula_id": "formula_0", "formula_text": "𝑣 (𝑥 ′ )) = x ∈ R 3×224×224 ." }, { "formula_coordinates": [ 4, 362.37, 554.8, 196.37, 12.97 ], "formula_id": "formula_1", "formula_text": "𝑣 S M (𝑥 ′ ) = [𝐴(𝑓 𝑙 𝑣 (𝑥 ′ )) ⊙ 𝑓 𝑙 𝑣 (𝑥 ′ )] + 𝑓 𝑙 𝑣 (𝑥 ′ )(1)" }, { "formula_coordinates": [ 4, 364.86, 621.78, 193.88, 13.4 ], "formula_id": "formula_2", "formula_text": "𝑝𝑟𝑜𝑚𝑝𝑡 𝑒𝑚𝑏 (𝑥 ′ ) = 𝑓 𝑡 (𝑐 ∼ M (𝑥 ′ )) ∈ R 𝐾 ×512(2)" }, { "formula_coordinates": [ 4, 371.86, 675.52, 186.88, 35.8 ], "formula_id": "formula_3", "formula_text": "𝐴𝑃𝐸 (𝑥 ′ , 𝑥) = 𝑝𝑟𝑜𝑚𝑝𝑡 𝑒𝑚𝑏 𝑎𝑣𝑔 (𝑥 ′ , 𝑥) = 𝑝𝑟𝑜𝑚𝑝𝑡 𝑒𝑚𝑏 (𝑥 ′ ) + 𝑝𝑟𝑜𝑚𝑝𝑡 𝑒𝑚𝑏 (𝑥) 2(3)" }, { "formula_coordinates": [ 5, 85.74, 200.66, 208.84, 27.8 ], "formula_id": "formula_4", "formula_text": "𝑝 (𝑦|𝑥 ′ ) = exp(< 𝑓 𝑣 (𝑥 ′ ), 𝐴𝑃𝐸 (𝑥 ′ , 𝑥) > /𝜏) | Y | 𝑘=1 exp(< 𝑓 𝑣 (𝑥 ′ ), 𝐴𝑃𝐸 (𝑥 ′ , 𝑥) > /𝜏)(4)" }, { "formula_coordinates": [ 5, 80.5, 281.83, 214.08, 27.7 ], "formula_id": "formula_5", "formula_text": "L 𝑐𝑒 = arg min G VAT E (𝑥 ′ ,𝑥,𝑦) ∈ P ( D 𝑠 ) - Y 𝑆 ∑︁ 𝑘=1 𝑦 𝑘 𝑙𝑜𝑔(𝑝 (𝑦 𝑘 |𝑥 ′ ))(5)" }, { "formula_coordinates": [ 5, 113.29, 440.7, 181.3, 18.71 ], "formula_id": "formula_6", "formula_text": "L 𝑟𝑒𝑐𝑜𝑛 = argmin U E P ( D 𝑠 ) || x -𝑥 ′ || 2(6)" }, { "formula_coordinates": [ 5, 104.74, 508.97, 189.84, 18.73 ], "formula_id": "formula_7", "formula_text": "𝑝 (𝑦 |𝑥 ′ ) E P ( D 𝑠 ) 𝑚𝑖𝑛([𝑝 (𝑦 1 |𝑥 ′ ); • • • ; 𝑝 (𝑦 | Y | |𝑥 ′ )])(7)" }, { "formula_coordinates": [ 5, 79.66, 562.86, 214.92, 29.83 ], "formula_id": "formula_8", "formula_text": "L 𝑡𝑜𝑡𝑎𝑙 = arg min 𝑝 (𝑦 |𝑥 ′ ),U,G VAT [L 𝑐𝑒 + 𝛼 * (L 𝑠𝑠𝑙 + L 𝑟𝑒𝑐𝑜𝑛 ) + (1 -𝛼) * L 𝑑𝑚 ](8)" } ]
2024-03-03
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b8", "b14", "b15", "b16" ], "table_ref": [], "text": "Chronic wounds, a widespread issue affecting individuals of all ages, represent a silent epidemic. It was estimated in 2019 that the prevalence of chronic wounds of mixed etiologies was 2.21 per 1000 population [1]. Wound management is a major issue for bedridden patients in hospitals and elderly residents in aged care facilities. Wound management is challenging, and there is no standardized patient-centric care model. Wound documentation is crucial and should encompass a range of details such as location, size, surrounding skin condition, presence of undermining and tunneling, exudate, odor, or pain levels. Automated wound analysis by a computer system would allow accurate and precise diagnosis and assessment of the wound type, and enable quantitative assessment during healing, which could span months. Automated wound characterization offers a key advantage by allowing remote monitoring, eliminating the necessity for frequent and expensive physical examinations by medical specialists.\nWound assessment based on photography/videos is challenging because of substantial variations in appearance and quality caused by different camera quality, lighting, and camera pose. Data-driven vision-based technologies have been shown to improve wound assessment by enabling objective quantitative evidence for decision support [2]. Researchers have reported deep learning methods for 2D wound detection and classification [3], wound segmentation [4][5][6] or 2D wound image healing classification [7]. However, 2D wound measurement techniques do not report wound depth, potentially overlooking a crucial aspect of the wound healing process. Additional challenges include identifying wound margins, variations Fig. 1: Syn3DWound aims to produce high-quality synthetic data with precise control of the environment and acquisition protocol from a 2D real-world wound and a 3D avatar. It allows the generation of extensive datasets for evaluating segmentation models. Furthermore, the camera's intrinsic and extrinsic are saved to analyze the performance of 3D reconstruction methods. in the wound's appearance due to changes in patient position, and the natural curvature of body parts such as the heel, toe, and lower leg.\nAdvanced 3D imaging technology, coupled with automated analysis methods, enables standardized and comprehensive image acquisition [8]. It could provide natural representation and measurements, especially for attributes that may be challenging to identify in 2D images [8,9]. Automated wound analysis in 3D could assess the topology and textural features of wounds [10][11][12][13][14], offering valuable clinical information. A major bottleneck for training modern machine learning systems is obtaining high-quality training datasets and their associated ground truth (annotated by medical experts). Datasets that include 3D sensing are scarce, and collecting video of actual wounds is problematic: it has the potential to interfere with care, may include sensitive views, and can only be performed with limited camera and light setups. An alternative to collecting actual data is synthesizing images and their corresponding annotations, a strategy used in various domains, sometimes called digital twin [9,15]. Relevant to this paper, Dai et al. 
[16] generated textured burn wounds from a 3D human avatar as a synthetic annotated dataset. Sinha et al. [17] used similar methods to create 2D images from 3D textured meshes with diverse skin tones and background scenes. In contrast to existing methods, our proposed solution produces 2D synthetic data and precise 3D wound models, facilitating the evaluation of state-of-the-art 3D reconstruction methodologies (Fig. 1). This contribution is two-fold: firstly, we introduce a synthetic 3D wound dataset, Syn3DWound, available for research purposes, with 2D and 3D ground truth. Secondly, we present baseline methods and evaluation protocols for i) 3D wound reconstruction, ii) 2D wound bed segmentation, and iii) 3D wound bed mapping, showcasing the merits of 3D wound analysis over 2D approaches." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "SYN3DWOUND DATASET", "publication_ref": [ "b17", "b18", "b16", "b4", "b5", "b19", "b3", "b20", "b21" ], "table_ref": [ "tab_0" ], "text": "The synthetic views in Syn3DWound are generated using Blender, an open-source 3D computer graphics package capable of producing realistic stills and videos by controlling the camera path. The user has the flexibility to manipulate the wound characteristics, its location on the body, the human body shape, and the texture. The key steps are outlined in Fig. 2.

The inputs consist of a 3D human body avatar, a 2D wound image, and a predefined 3D wound shape and location. Users can manually carve a wound onto the 3D human body avatar surface, specifying its depth and location. The visual appearance of the wound, along with its segmentation mask, is integrated into the avatar's texture files.

The outputs include a 3D human body avatar featuring an attached wound, a collection of rendered images depicting various camera and environmental configurations, and all the parameters necessary for replicating the output. Beyond achieving pixel-perfect segmentation masks and comprehensive data generation, Syn3DWound also provides precise 3D models of the wound, essential for assessing the effectiveness of 3D methodologies.

We employed the Rendered People dataset [18] and the 3D Body Text dataset [19], which offer high-resolution textured meshes of the human body. For the 3D rendering engine, Cycles was chosen for its enhanced light physics modelling and more lifelike rendering compared to routinely used real-time game graphics engines [17].

After generating the 3D scene, users can create a camera path, allowing variations in the number of images used for 3D reconstruction, as well as ground truth for the camera intrinsics and trajectory. For a particular wound, users can explore different observation angles, camera resolutions, and lens characteristics, as depicted in the first row of Fig. 3. To simulate imperfections present in real-world image acquisition, users can intentionally introduce overexposure or apply motion/Gaussian blurring to the rendered images. Lighting aspects, such as the strength and the 3D placement of the light source, can also be adjusted at this stage, influencing the appearance of shadows in the rendered image. Ideally, wound characterization would include wound type, body location, size, variations in lighting conditions, and skin colour differences. Unfortunately, the availability of labelled data for 3D wound analysis has been limited. Existing datasets such as WoundSeg [5], DFUC2022 [6], FUSeg Challenge [20], AZH wound care [4], and Medetec [21] primarily consist of 2D annotated images.
WoundDB [22] provides stereo images with the potential for depth estimation investigations. However, these images are not sequential, which limits their utility for 3D wound reconstruction. In contrast, Syn3DWound provides perfect information, albeit simulated. Table 1 compares Syn3DWound with these existing datasets." }, { "figure_ref": [ "fig_1" ], "heading": "EXPERIMENTS AND RESULTS", "publication_ref": [], "table_ref": [], "text": "In this section, we detail the evaluation protocol used to perform 2D and 3D wound assessment on two 3D models, each representing a different ethnicity and depicted in Fig. 3. Upon the acceptance of our paper, we will release a more extensive dataset, along with the code required to compute the evaluation metrics." }, { "figure_ref": [ "fig_4" ], "heading": "Baseline systems and evaluation metrics", "publication_ref": [ "b11", "b12", "b22", "b23", "b24", "b25", "b23", "b26" ], "table_ref": [], "text": "3D wound reconstruction: A 3D reconstruction algorithm estimates the 3D geometry of an object from a collection of 2D images. The prevailing methods in the literature rely on standard projective geometry techniques such as structure-from-motion and multiview stereopsis [12,13,23]. However, newer deep learning approaches for 3D scene rendering (e.g., Neural Radiance Fields (NeRF) [24]) are becoming very competitive. In this paper, we conduct a comparative analysis of two prominent open-source tools for 3D reconstruction: COLMAP [25] and Meshroom [26]. We also assess the performance of NeuS-Facto, a NeRF model tailored for surface extraction, from the open-source SDFStudio toolbox [24].

We compared the 3D reconstructed meshes with the ground-truth synthetic mesh, after alignment using three steps: i) align the camera positions of the ground-truth data with those estimated by the frameworks (by solving a Procrustes problem [27]); ii) crop both meshes using the ground-truth 3D mask for wound bed segmentation, followed by fine alignment using the Iterative Closest Point (ICP) algorithm (applied only to the cropped meshes); iii) apply the transformations to the original meshes, followed by cropping the wound area again to report performance on the wound area only. In Table 2, we report the Average Symmetric Distance (ASD), Hausdorff Distance (HD90), and Normal Consistency (NC) metrics.

The proposed pipeline facilitates benchmarking of 3D reconstruction methods and investigation into the influence of image characteristics on the performance of the reconstruction method. Fig. 5 shows the overall performance on the shoulder wound. COLMAP outperforms its competitor with increased image resolution. In every scenario, high-resolution images allow more fine-grained 3D reconstruction (see Fig. 6)." }, { "figure_ref": [], "heading": "2D wound segmentation:", "publication_ref": [ "b27", "b5", "b28", "b29", "b30", "b23" ], "table_ref": [], "text": "We trained a deep learning segmentation model, SegFormer [28], on a dataset provided by DFUC2022 [6] and tested it on a set of images from Syn3DWound. From a predicted mask A and a ground truth mask B, we compared the IoU score (Intersection over Union), |A∩B| / |A∪B|, and the Dice score, 2|A∩B| / (|A|+|B|) (a minimal computation sketch is given below). 3D wound bed segmentation: We introduce a 3D wound segmentation technique that assigns binary labels to different regions of the reconstructed 3D models.
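For completeness, the following is a minimal sketch of how the IoU and Dice overlap scores referenced above can be computed for a pair of binary masks. The array names and the toy example are illustrative assumptions; this is not the evaluation code released with the dataset.

```python
# Minimal sketch of IoU and Dice for binary segmentation masks (NumPy).
import numpy as np


def iou_and_dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """pred, gt: boolean or {0,1} arrays of identical shape (H, W)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = intersection / (union + eps)                        # |A∩B| / |A∪B|
    dice = 2.0 * intersection / (pred.sum() + gt.sum() + eps)  # 2|A∩B| / (|A|+|B|)
    return float(iou), float(dice)


if __name__ == "__main__":
    a = np.zeros((4, 4), dtype=bool)
    b = np.zeros((4, 4), dtype=bool)
    a[1:3, 1:3] = True          # predicted wound region (4 pixels)
    b[1:3, 1:4] = True          # ground-truth wound region (6 pixels)
    print(iou_and_dice(a, b))   # approximately (0.667, 0.8)
```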
3D wound bed segmentation: We introduce a 3D wound segmentation technique that assigns binary labels to different regions of the reconstructed 3D models. We used a Meshroom-based texturing algorithm [29] to project a set of 2D wound segmentation masks onto the 3D mesh vertices, labelling them as background or wound bed.\nFollowing the established standard [30], we report the Balanced Average Hausdorff Distance (BAHD) [31], defined as BAHD(G, S) = (H(G, S) + H(S, G)) / (2|G|), where H is the directed average Hausdorff distance and |G| is the number of points in the ground-truth wound segmentation. We also report recall R = Tp / (Tp + Fn) and precision P = Tp / (Tp + Fp), with Tp the number of vertices from the 3D ground-truth segmentation that are also in the 3D estimated segmentation, Fp the number of vertices in the predicted segmentation that are missing from the ground-truth segmentation, and Fn the number of ground-truth segmentation vertices missing from the predicted segmentation.\nTable 2 (excerpt): — | 0.161 | 0.397 | 0.953; NeuS-Facto (SDFStudio) [24] | 0.166 | 0.404 | 0.960.\nFig. 5: Evaluating 3D reconstruction outcomes across diverse image resolutions and quantities on the shoulder wound. We showcase results for the COLMAP pipeline using the ASD metric. However, similar trends are observed across different 3D reconstruction methodologies and evaluation metrics." }, { "figure_ref": [ "fig_1", "fig_5" ], "heading": "Results and discussion", "publication_ref": [ "b16", "b27", "b5" ], "table_ref": [ "tab_1", "tab_2" ], "text": "Influence of the quality of the images: While a recent study explores the use of synthetic images for dermatological assessments [17] with relatively small 512 × 512 images, we propose adopting Cycles, a powerful rendering engine that outperforms Open3D's physics-based renderer or Unity3D. Notably, our rendering method, though not real-time, produces superior results, taking an average of 12.86 (±0.73) seconds to generate a 4K synthetic image.\nBalancing Gender and Racial Diversity: In response to the emerging concern of the under-representation of minority groups in wound imaging data, the two 3D models evaluated in this section represent different ethnicities (Fig. 3).\n3D wound reconstruction: Quantitative results for 3D wound reconstruction in two selected samples are reported in Table 2. In our experiment, COLMAP demonstrates superior surface accuracy, while the performance of the neural-rendering-based method is nearly comparable.\n2D wound segmentation: Table 3 presents the performance of SegFormer [28] trained on DFUC2022 [6] and tested on the synthetic images produced by Syn3DWound's model. The model, having been trained on real 2D wound data, exhibits promising performance when applied to our synthetic data, validating the quality of the Syn3DWound dataset. However, the limitations of 2D wound segmentation arise from the constrained perspective during capture, which can impact accuracy and comprehensiveness because single views fail to fully represent the complexity of 3D structures (e.g., as shown in the second row of Fig. 3, only the middle panel of the leg/shoulder renderings offers a complete view of the wound, and none conveys details such as depth). Therefore, it is advisable to adopt methods that leverage rich 3D information through 3D segmentation. One way to achieve this is to project 2D masks onto 3D mesh vertices based on the results of the initial 2D segmentation.\n3D wound segmentation: Table 4 compares 3D wound segmentation results with the ground truth using the previously described metrics.\nNotably, for the second sample, incorporating a higher number of 2D segmentation maps enhances the performance of the resulting 3D segmentation. Fig.
7 (left) shows the reconstructed 3D wound segmentation of the shoulder wound, generated from 120 renderings, with color-coded true positive (light blue), false positives (blue) and false negatives (yellow). The 3D projection of 2D segmentations provides a more precise understanding of the geometric failure modes of 2D segmentation models." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we contribute a unique 3D wound dataset to encourage collaboration between computer vision and medical imaging communities, intending to advance 3D wound reconstruction and documentation. We perform a study on widely used 3D reconstruction and segmentation pipelines, generating a set of baseline results pivotal for a better understanding of 3D wound analysis to address limitations in traditional 2D wound documentation. " }, { "figure_ref": [], "heading": "COMPLIANCE WITH ETHICAL STANDARDS", "publication_ref": [], "table_ref": [], "text": "This study was performed in line with the principles of the Declaration of Helsinki. The experimental procedures involving human subjects described in this paper were approved by CSIRO Health and Medical Human Research Ethics Committee (CHMHREC). The CHMHREC is an NHMRC Registered Human Research Ethics Committee (EC00187). CSIRO Ethics ID 2022_025_LR" } ]
Wound management poses a significant challenge, particularly for bedridden patients and the elderly. Accurate diagnosis and healing monitoring can benefit greatly from modern image analysis, which provides objective and precise measurements of wounds. Despite several existing techniques, the shortage of large and diverse training datasets remains a major obstacle to building machine-learning-based frameworks. This paper introduces Syn3DWound, an open-source dataset of high-fidelity simulated wounds with 2D and 3D annotations. We propose baseline methods and a benchmarking framework for automated 3D morphometry analysis and 2D/3D wound segmentation.
SYN3DWOUND: A SYNTHETIC DATASET FOR 3D WOUND BED ANALYSIS
[ { "figure_caption": "Fig. 2 :2Fig. 2: Representations of the specific components involved in the synthetic wound generation. i) 3D human avatar, ii) Wound image and wound extraction, iii) Mesh sculpting including wound shape and placement in the human body, iv) View selection of the 3D human body avatar, and v) Rendering and postprocessing.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig. 3: Representations of 2D wound images and their corresponding segmentation maps are generated from various camera trajectories of the 3D wound models. The two models featured in this manuscript are a leg wound (left) and a shoulder wound (right).", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Overview of a traditional framework for 3D reconstruction and analysis: sequential image collection, feature extraction and matching, camera models and sparse point cloud, dense point cloud, meshing point cloud, and 3D reconstruction.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Table 2 :2Evaluating the performance of established 3D reconstruction pipelines by benchmarking the reconstruction of Sample 1 (leg wound), illustrated through a set of 300 2D images.Methodology/Tool↓ ASD (mm) ↓ HD90 (mm)", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: Assessing the impact of the image resolution (1080p to 4k) on mesh quality from Meshroom reconstruction on Sample 2 (shoulder wound). An increased camera resolution allows the reconstruction of more detailed geometries", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 :7Fig. 7: Color-coded 3D segmentation metrics for the shoulder wound. Right: ground-truth (red) and predicted (green) contour.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "A comparison of the proposed 3D Wound bed dataset and the existing wound datasets. The Syn3DWound dataset consists of 3D wound models that could generate diverse 2D wound images from different views of the same target wound. 
⋆ 188 RGB, 188 thermal, 184 stereo, and 177 depth images are included in the WoundsDB database.", "figure_data": "Dataset | Year | Modality | Domain | Total Images | Wound Etiology\nSyn3DWound (our) | 2023 | RGB | 2D/3D | 20 models † | Pressure, trauma, arterial\nWoundSeg [5] | 2023 | RGB | 2D | 2,686 | Diabetic, pressure, trauma, venous, surgical, arterial, cellulitis, and others\nDFUC2022 [6] | 2022 | RGB | 2D | 4,000 | Foot ulcer\nWoundsDB [22] | 2021 | RGB, Stereo, Thermal | 2D | 737 ⋆ | Venous ulcers, ischaemia\nFUSeg Challenge [20] | 2021 | RGB | 2D | 1,210 | Foot ulcer\nAZH wound care [4] | 2020 | RGB | 2D | 1,109 | Foot ulcer\nMedetec [21] | NA | RGB | 2D | 160 | Foot ulcer", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Evaluation of 2D segmentation model trained on DFUC2022[6] and tested on renderings of the leg wound 3D model.", "figure_data": "Encoder | Network | Renderings | ↑ IoU | ↑ Dice\nMiT-B5 | SegFormer [28] | 300 | 0.888 | 0.940\nMiT: Mix Transformer encoders.", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Evaluation of 3D wound bed segmentation.", "figure_data": "Wound Sample | ↓ BAHD (mm) | ↑ P | ↑ R\nLeg Wound, 300 renderings | 0.028 | 0.925 | 0.985\nShoulder Wound, 80 renderings | 0.698 | 0.927 | 0.970\nShoulder Wound, 120 renderings | 0.101 | 0.957 | 0.971", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" } ]
Léo Lebrat; Rodrigo Santa Cruz; Remi Chierchia; Yulia Arzhaeva; Mohammad Ali Armin; Joshua Goldsmith; Jeremy Oorloff; Prithvi Reddy; Chuong Nguyen; Lars Petersson; Michelle Barakat-Johnson; Georgina Luscombe; Clinton Fookes; Olivier Salvado; David Ahmedt-Aristizabal
[ { "authors": "Laura Martinengo; Maja Olsson; Ram Bajpai; Michael Soljak; Zee Upton; Artur Schmidtchen; Josip Car; Krister Järbrink", "journal": "Annals of Epidemiology", "ref_id": "b0", "title": "Prevalence of chronic wounds in the general population: systematic review and meta-analysis of observational studies", "year": "2019" }, { "authors": "Geert Litjens; Thijs Kooi; Ehteshami Babak; Arnaud Bejnordi; Adiyoso Arindra; Francesco Setio; Mohsen Ciompi; Jeroen Ghafoorian; Awm Van Der Laak; Bram Van Ginneken; Clara I Sánchez", "journal": "Medical image analysis", "ref_id": "b1", "title": "A survey on deep learning in medical image analysis", "year": "2017" }, { "authors": "Ruyi Zhang; Dingcheng Tian; Dechao Xu; Wei Qian; Yudong Yao", "journal": "IEEE Access", "ref_id": "b2", "title": "A survey of wound image analysis using deep learning: classification, detection, and segmentation", "year": "2022" }, { "authors": "Chuanbo Wang; Victor Dm Anisuzzaman; Mrinal Williamson; Behrouz Kanti Dhar; Jeffrey Rostami; Sandeep Niezgoda; Zeyun Gopalakrishnan; Yu", "journal": "Scientific reports", "ref_id": "b3", "title": "Fully automatic wound segmentation with deep convolutional neural networks", "year": "2020" }, { "authors": "Subba Reddy Oota; Vijay Rowtula; Shahid Mohammed; Minghsun Liu; Manish Gupta", "journal": "", "ref_id": "b4", "title": "Wsnet: Towards an effective method for wound image segmentation", "year": "2023" }, { "authors": "Connah Kendrick; Bill Cassidy; Claire O' Joseph M Pappachan; Cornelious J Shea; Elias Fernandez; Koshy Chacko; Neil D Jacob; Moi Reeves; Yap Hoon", "journal": "", "ref_id": "b5", "title": "Translating clinical delineation of diabetic foot ulcers into machine interpretable segmentation", "year": "2022" }, { "authors": "Subba Reddy Oota; Vijay Rowtula; Shahid Mohammed; Jeffrey Galitz; Minghsun Liu; Manish Gupta", "journal": "", "ref_id": "b6", "title": "Healtech-a system for predicting patient hospitalization risk and wound progression in old patients", "year": "2021" }, { "authors": "David Ahmedt-Aristizabal; Chuong Nguyen; Lachlan Tychsen-Smith; Ashley Stacey; Shenghong Li; Joseph Pathikulangara; Lars Petersson; Dadong Wang", "journal": "Computer Methods and Programs in Biomedicine", "ref_id": "b7", "title": "Monitoring of pigmented skin lesions using 3d whole body imaging", "year": "2023" }, { "authors": "Zahra Mirikharaji; Kumar Abhishek; Alceu Bissoto; Catarina Barata; Sandra Avila; Eduardo Valle; M Emre Celebi; Ghassan Hamarneh", "journal": "Medical Image Analysis", "ref_id": "b8", "title": "A survey on deep learning for skin lesion segmentation", "year": "2023" }, { "authors": "Chunhui Liu; Xingyu Fan; Zhizhi Guo; Zhongjun Mo; Eric I Chao Chang; Yan Xu", "journal": "BMC Bioinformatics", "ref_id": "b9", "title": "Wound area measurement with 3D transformation and smartphone images", "year": "2019" }, { "authors": "Dominique Houman Mirzaalian Dastjerdi; Stefan J Töpfer; Andreas Rupitsch; Maier", "journal": "International Journal of Biomedical Imaging", "ref_id": "b10", "title": "Measuring surface area of skin lesions with 2D and 3D algorithms", "year": "2019" }, { "authors": "Tim Shirley; Dmitri Presnov; Andreas Kolb", "journal": "Journal of WSCG", "ref_id": "b11", "title": "A lightweight approach to 3D measurement of chronic wounds", "year": "2019" }, { "authors": "M C Fellipe; Bruno M Barbosa; Rafael B Carvalho; Gomes", "journal": "CBMS", "ref_id": "b12", "title": "Accurate chronic wound area measurement using structure from motion", "year": "2020-07" }, { 
"authors": "David Sánchez-Jiménez; Fernando F Buchón-Moragues; Begoña Escutia-Muñoz; Rafael Botella-Estrada", "journal": "International Wound Journal", "ref_id": "b13", "title": "SfM-3DULC: Reliability of a new 3D wound measurement procedure and its accuracy in projected area", "year": "2022" }, { "authors": "Pourya Shamsolmoali; Masoumeh Zareapoor; Eric Granger; Huiyu Zhou; Ruili Wang; M Emre Celebi; Jie Yang", "journal": "Information Fusion", "ref_id": "b14", "title": "Image synthesis with adversarial networks: A comprehensive survey and case studies", "year": "2021" }, { "authors": "Fei Dai; Dengyi Zhang; Kehua Su; Ning Xin", "journal": "Journal of Burn Care & Research", "ref_id": "b15", "title": "Burn images segmentation based on burn-gan", "year": "2021" }, { "authors": "Ashish Sinha; Jeremy Kawahara; Arezou Pakzad; Kumar Abhishek; Matthieu Ruthven; Enjie Ghorbel; Anis Kacem; Djamila Aouada; Ghassan Hamarneh", "journal": "", "ref_id": "b16", "title": "Dermsynth3d: Synthesis of in-the-wild annotated dermatology images", "year": "2023" }, { "authors": " Renderpeople", "journal": "", "ref_id": "b17", "title": "Bundle Swimwear Rigged 002", "year": "2020" }, { "authors": "Alexandre Saint; Eman Ahmed; Kseniya Cherenkova; Gleb Gusev; Djamila Aouada; Bjorn Ottersten", "journal": "IEEE", "ref_id": "b18", "title": "3dbodytex: Textured 3d body dataset", "year": "2018" }, { "authors": "Chuanbo Wang; Amirreza Mahbod; Isabella Ellinger; Adrian Galdran; Sandeep Gopalakrishnan; Jeffrey Niezgoda; Zeyun Yu", "journal": "", "ref_id": "b19", "title": "Fuseg: The foot ulcer segmentation challenge", "year": "2022" }, { "authors": "Steve Thomas", "journal": "", "ref_id": "b20", "title": "Medetec wound database", "year": "2020" }, { "authors": "Michał Kręcichwost; Joanna Czajkowska; Agata Wijata; Jan Juszczyk; Bartłomiej Pyciński; Marta Biesok; Marcin Rudzki; Jakub Majewski; Jacek Kostecki; Ewa Pietka", "journal": "Computerized Medical Imaging and Graphics", "ref_id": "b21", "title": "Chronic wounds multimodal image database", "year": "2021" }, { "authors": "Syamantak Kumar; Dhruv Jaglan; Nagarajan Ganapathy; Thomas M Deserno", "journal": "SPIE", "ref_id": "b22", "title": "A comparison of open source libraries ready for 3d reconstruction of wounds", "year": "2019" }, { "authors": "Zehao Yu; Anpei Chen; Bozidar Antic; Songyou Peng; Apratim Bhattacharyya; Michael Niemeyer; Siyu Tang; Torsten Sattler; Andreas Geiger", "journal": "", "ref_id": "b23", "title": "Sdfstudio: A unified framework for surface reconstruction", "year": "2022" }, { "authors": "Johannes Lutz; Schönberger ; Jan-Michael Frahm", "journal": "", "ref_id": "b24", "title": "COLMAP: A general-purpose Structure-from-Motion (SfM) and Multi-View Stereo (MVS) pipeline", "year": "" }, { "authors": " Alicevision", "journal": "", "ref_id": "b25", "title": "Meshroom: A 3D reconstruction software", "year": "2018" }, { "authors": "H Gene; Charles F Golub; Van Loan", "journal": "JHU press", "ref_id": "b26", "title": "Matrix computations", "year": "2013" }, { "authors": "Enze Xie; Wenhai Wang; Zhiding Yu; Anima Anandkumar; Jose M Alvarez; Ping Luo", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Segformer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "Carsten Griwodz; Simone Gasparini; Lilian Calvet; Pierre Gurdjos; Fabien Castan; Gregoire Benoit Maujean; Yann De Lillo; Lanthony", "journal": "", "ref_id": "b28", "title": "Alicevision meshroom: An 
open-source 3d reconstruction pipeline", "year": "2021" }, { "authors": "Abdel Aziz; Taha ; Allan Hanbury", "journal": "BMC Medical Imaging", "ref_id": "b29", "title": "Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool", "year": "2015" }, { "authors": "Orhun Utku Aydin; Abdel Aziz Taha; Adam Hilbert; Ahmed A Khalil; Ivana Galinovic; B Jochen; Dietmar Fiebach; Vince Istvan Frey; Madai", "journal": "European radiology experimental", "ref_id": "b30", "title": "On the usage of average hausdorff distance for segmentation performance assessment: hidden error when used for ranking", "year": "2021" } ]
[]
2024-03-25
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "tifiers from the exemplar images. ADI first expands the semantic conditioning space by introducing layer-wise identifier tokens, thereby increasing the representational richness while distributing the inversion across different features. Then, to block the inversion of action-agnostic features, ADI extracts the gradient invariance from the constructed sample triples and masks the updates of irrelevant channels. To comprehensively evaluate the task, we present an Action-Bench that includes a variety of actions, each accompanied by meticulously selected samples. Both quantitative and qualitative results show that our ADI outperforms existing baselines in action-customized T2I generation. Our project page is at https://adi-t2i.github.io/ADI." }, { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b9", "b17", "b19", "b25", "b6", "b8", "b14", "b18", "b25", "b14", "b18", "b25", "b1", "b35", "b20", "b2", "b23", "b9", "b8", "b6" ], "table_ref": [], "text": "Thanks to the remarkable advances in text-to-image generation models [3,10,18,20], in particular the recent diffusion model [22,26], high-quality and diverse images can be synthesized under the control of text descriptions. However, it is difficult to provide precise descriptions of the desired actions, which are highly abstracted and summarized concepts. Therefore, relying solely on textual descriptions to generate actions tends to reduce fidelity to user requirements. Additionally, controllable generation methods [14,35] that rely on the conditioning of a skeleton or sketch image suffer from limited diversity and freedom, and they show difficulty generalizing to unseen subjects without retraining. In this paper, we study the action customization task, capturing the common action in the given images to generate new images with various new subjects.\nTo better understand the challenge of action customization, we start by examining existing subject-driven customization methods. Observations shown in Fig. 2 can be divided into two categories. Several methods including DreamBooth [25], Textual Inversion [5], and ReVersion [7], generate images that are unrelated to specific actions, suggesting that they fail to capture the representative characteristics of the actions. Since most of them are designed to invert appearance features with a pixel-level reconstruction loss, low-level details are emphasized during optimization while high-level action features are neglected. Benefiting from fine-tuning cross-attention or utilizing perlayer tokens, Custom Diffusion [9] and P+ [30] offer a larger semantic conditioning space for learning new concepts. Consequently, they are capable of encoding actionrelated knowledge such as \"raises one finger\" or \"raises both arms for cheering\" from exemplar images. However, they fail to decouple the focus from action-agnostic features, such as the appearance of the human body. These pieces of information are also encoded into the learned identifiers and \"contaminate\" the generation of animals during inference. As a result, the intended gorilla is replaced by a woman, and the tigers generated by the two methods exhibit human arms instead.\nTo avoid the appearance leakage while accurately modeling the target action, we propose Action-Disentangled Identifier (ADI) to learn the optimal action-specific identifiers. Firstly, we expand the semantic conditioning space by applying layer-wise identifier tokens. 
Since existing works have analyzed that different layers have varying degrees of control over low-level and high-level features [30], such an expansion increases the accommodation of various features, making it easier to invert action-related features. Furthermore, we would like to decouple the action-agnostic features from the learning of action identifiers. To achieve this, we discover invariant mechanisms in the data that are difficult to vary across examples. Specifically, given an exemplar image with the specific action, another same-action image can be randomly sampled from the training data, forming a context-different pair. Meanwhile, leveraging mature subject-driven customization techniques, an image that shares the similar context can be quickly synthesized to form an action-different pair. To decouple the highlycoupled features, we disentangle action-agnostic features at the gradient level, and construct two context gradient masks by comparing the difference on the gradients over the input pairs. By overwriting the merged gradient mask to the gradient of the anchor image, the update of action-agnostic channels on the identifiers is discarded.\nMoreover, as a pioneering effort in this direction, we also contribute to a new benchmark named ActionBench, which provides a testbed of unique actions with diverse images for the under-explored task. We conduct extensive experiments on ActionBench, and a quick glance at the performance of ADI is illustrated in Fig. 1, where users can freely combine the designated action identifiers with various unseen humans and even animals. In summary, the main contributions of our work are three-fold: • We propose a novel action customization task, which requires learning the desired action from limited data for future generation. While existing customization fo-cuses on reprinting appearances, we highlight this understudied but important problem. • We contribute the ActionBench, where a variety of unique actions with manually filtered images provide the evaluation conditions for the task. • We devise the Action-Disentangled Identifier (ADI) method, which successfully inverts action-related features into the learned identifiers that can be freely combined with various characters and animals to generate high-quality images. [15,19,22,26]. GLIDE [15] introduces text conditions into the diffusion process through the use of an unclassified guide. DALL-E 2 [19] employs a diffusion prior module and cascading diffusion decoders to generate high-resolution images based on the CLIP [17] text encoder. Imagen [26] focuses on language understanding by using a large T5 language model to better represent semantics. The latent diffusion model [22] improves computational efficiency by performing the diffusion process in lowdimension latent space with an autoencoder. Finally, Stable Diffusion (SD) [22] employs a cross-attention mechanism to inject textual conditions into the diffusion generation process, aligning with the provided textual input. However, it is difficult to provide precise action descriptions in text, since user intent and machine understanding are not aligned. Furthermore, experimental results in Fig. 4 show that some actions are difficult to generate correctly without re-training, e.g., \"performs a handstand\". Controllable Action Generation. The paper focuses on transferring the desired action from examplar images to unseen people, characters, and even animals for photorealistic image generation. 
Existing efforts take source images and pose information (e.g., skeletal images or body parsing) as conditions to control the generation. Previous controllable solutions based on GANs [12,13,36] and VAEs [21,33] suffer from training difficulties and poor generation results. Some subsequent works [24,32] introduce text conditions to guide the action generation, yet fail with open vocabulary due to the small size of the vocabulary pools. Thanks to the significant advances of T2I diffusion models, recent methods [10,14,29], in particular the popular ControlNet [35], add arbitrary conditions to improve the versatility and controllability. While gaining a tremendous amount of traction from the community, ControlNet refers to the provided skeleton image to generate the action, which reduces flexibility and diversity. In addition, the objective of designing a general framework with additional trainable modules makes it not well-targeted to animals. In this work, we investigate customization solutions for action generation. Subject-Driven Customization. Due to the demand for generating images with user-specified subjects, customization methods [5, 9, 25, 30] tailored to the appearance have been studied in the context of T2I generation. Specifically, DreamBooth [25] binds rare new words with specific subjects through fine-tuning the whole T2I generator. Textual Inversion [5] learns an extra identifier to represent the subject and adds the identifier as a new word to the dictionary of the text encoder. Custom Diffusion [9] only fine-tunes the key and value matrices of the cross-attention to represent new concepts. P+ [30] extends the textual-conditioning space with per-layer tokens to allow for greater disentangling and control. Despite the success achieved, the experimental results in Fig. 2 show their failure in action customization. A recent work ReVersion [7] makes progress in learning specific relations including some interactions from exemplar images. However, the design of the method, which specializes in learning spatial relations, makes it difficult to invert action information." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Action Customization Benchmark", "publication_ref": [ "b15" ], "table_ref": [], "text": "Given a set of exemplar images\nX = {x 1 , x 2 , • • • , x N },\nwe assume that all images contain the same action performed by different people. The action-agnostic descriptions associated to the exemplar images are also provided, which can be used as prompt templates during training. The objective of the action customization task is to extract the co-existing action and transfer it to the synthesis of action-specific images with different new subjects. In order to provide suitable conditions for systematic comparisons on this task, we present a new ActionBench, which consists of diverse actions accompanied by meticulously selected sample images. The benchmark can be used for both quantitative and qualitative comparisons. Action Categories. To determine the involved actions, we first request GPT-4 [16] to provide 50 candidate action categories, and then attempt to collect images for these candidates. Only actions that can collect sufficient high-quality images are preserved. We finally define eight unique actions, ranging from single-handed (e.g., \"raises one finger\") to full-body movements (e.g., \"performs a handstand\").\nExemplar Images and Prompts. 
For each action, we collect ten example images with corresponding textual descriptions, featuring different people. We manually remove action-related descriptions from the textual content to make them suitable as prompt templates. Evaluation Subjects. We provide a list containing 23 subjects, including generic humans (e.g., \"An old man\"), wellknown personalities (e.g., \"David Beckham\"), and animals (e.g., \"A panda\"). The latter two are guaranteed to be completely unseen, which tests the generalization of methods." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "We start with the technical background in Sec. 4.1. Then, we provide a comprehensive description of our proposed ADI in Sec. 4.2." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b5", "b22" ], "table_ref": [], "text": "Our study is based on the Stable Diffusion (SD) [22] model, which is considered to be the public state-of-the-art text-toimage generator. Specifically, to operate the diffusion process [6] in a low-dimensional latent space, SD employs a hierarchical VAE that consists of an encoder E and a decoder D. The encoder E is tasked with encoding the given image x into latent features z, and the decoder D reconstructs the image x from the latent, i.e., x = D(z) = D(E(x)). To control the generation with the textual conditions, given the noisy latent z t , current time step t and text tokens y, a conditional U-Net [23] denoiser is trained to predict the noise ϵ added to the latent z:\nL = E z∼E(x),y,ϵ∼N (0,1),t ∥ϵ -ϵ θ (z t , t, y)∥ 2 2 , (1\n)\nwhere y is obtained by feeding the prompt into a CLIP [17] text encoder. During inference, the pre-trained SD first samples a latent z T from the standard normal distribution N (0, 1). Iteratively, z t-1 can be obtained by removing noise from z t conditioned on y. After the final denoising step, the latent z 0 is mapped to generate an image x with the decoder D." }, { "figure_ref": [ "fig_0" ], "heading": "Action-Disentangled Identifier (ADI)", "publication_ref": [], "table_ref": [], "text": "Given exemplar images that all contain a specific entity, existing subject-driven inversion methods [5, 9] learn to represent the entity as an identifier token v ∈ R d . And the learned v can then be employed in text prompts to produce diverse and novel images, where the entity can be generated with different contexts. In this paper, we continue the vein of capturing the common action in exemplar images by finding the optimal identifiers. An overview of our proposed ADI is illustrated in Fig. 3. Expanding Semantic Inversion. To overcome the preference to low-level appearance features, we apply layer-wise identifier tokens to increase the accommodation of various features. Specifically, for the l-th layer where l ∈ [1, L] and L is the number of cross-attention layers in the T2I model, a new identifier token v l ∈ R d is initialized. Feeding the prompt with v l into the text encoder, the output tokens y l control the update of the latents in the l-th layer, thus influencing the generation of the visual content. And the learned tokens from all layers can form a token set V, which can then be paired with different subjects for generation. 
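To make the layer-wise conditioning concrete, the following PyTorch sketch (not the authors' released code) keeps one learnable identifier embedding per cross-attention layer and splices it into a copy of the text-encoder output. For brevity it swaps each v_l directly into the encoder output rather than re-running the text encoder per layer, which is a simplification; all tensor sizes and the placeholder index are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LayerwiseIdentifiers(nn.Module):
    """One learnable identifier embedding v_l per cross-attention layer (schematic)."""

    def __init__(self, num_layers: int, token_dim: int):
        super().__init__()
        # v_l in R^d for l = 1..L, initialised from a small Gaussian.
        self.tokens = nn.Parameter(0.01 * torch.randn(num_layers, token_dim))

    def conditioning_for_layer(self, prompt_embeds: torch.Tensor,
                               placeholder_idx: int, layer: int) -> torch.Tensor:
        """Return y_l: the prompt embedding with the placeholder slot replaced by v_l."""
        y_l = prompt_embeds.clone()
        y_l[:, placeholder_idx, :] = self.tokens[layer]
        return y_l


# Illustrative usage: one conditioning tensor per cross-attention layer of the U-Net.
num_layers, dim = 16, 1024                    # assumed sizes for an SD-like model
identifiers = LayerwiseIdentifiers(num_layers, dim)
prompt_embeds = torch.randn(1, 77, dim)       # stand-in for the CLIP text-encoder output
per_layer_cond = [
    identifiers.conditioning_for_layer(prompt_embeds, placeholder_idx=5, layer=l)
    for l in range(num_layers)
]
```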
Rather than having a single identifier token take on the responsibility of reconstruction, having separate identifiers at different layers effectively ensures that more features are converted, including the action-related features we care about.\nLearning Gradient Mask with Context-Different Pair. The next step is to prevent the identifiers from inverting fea-tures that are not relevant to the action and thus contaminating the subsequent image generation. Given x (a,c) ∈ X as an anchor sample, where a denotes the specific action, and c denotes the action-agnostic context contained in the image including human appearance and background, we can randomly sample another image x (a,c) from X , where c represents that the context is different from c. Taking the context-different pair x (a,c) and x (a,c) as the input, we can calculate two gradients of the denoising loss L with respect to the identifier token v:\ng (a,c) = ∂L (a,c) ∂v ,(2)\ng (a,c) = ∂L (a,c) ∂v .(3)\nNote that the subscript l is omitted for the sake of uniformity and clarity. Each identifier token contains multiple channels, each carrying semantically distinct and independent information. And the gradient consistency of a channel indicates that the channel is likely to carry information about the specific action. Therefore, we calculate the absolute value of the difference between the two gradients:\n△g c = |g (a,c) -g (a,c) |,(4)\nwhere the semantic channels with a small difference can be regarded as action-related channels of the action a, which are expected to be preserved. Specifically, we sort the difference from the largest to the smallest, taking the value at β percent γ β as a threshold. In other words, β% of the channels are masked. Then, the mask that shares the same dimension as v can be calculated. For the k-th channel,\nm c k = 0, △g c k ⩾ γ β 1, △g c k < γ β .\n(5)\nBy overwriting the mask to the gradient of the anchor sample, the action-related knowledge is preserved and incorporated into the update of v, while the updates on actionagnostic channels are ignored. Note that since the specific visual invariance about the action changes slightly depending on the sample pair, the masked channels may not be exactly the same each time. Furthermore, both samples use the prompt of the anchor sample x (a,c) when calculating the gradients. Since the visual context of x (a,c) is inconsistent with the description in the prompt, the reconstruction loss favours larger gradients in the context-related channels. In this way, the action-related channels found through the threshold will be more accurate.\nLearning Gradient Mask with Action-Different Pair. Although the context-different pairs have the same action semantics, there may be differences in the visualization of the actions, and therefore the channels associated with the most representative action features do not necessarily have a smaller gradient difference. Since learning the gradient mask with the context-different only pair is not stable and effective enough, we also construct action-different pairs to generate the gradient mask from another perspective. For each sample x (a,c) in X , we can use it to quickly train a subject-driven customization model (e.g., DreamBooth) that effectively inverts the most of the low-level context information. Therefore, by filling the prompt template of x (a,c) with action descriptions that are different from a, the trained customization model can generate various action images as X (a,c) . 
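A simplified PyTorch sketch of the channel-wise masking described in Eqs. (2)-(10) is given below. It is not the authors' implementation: the β-percent thresholds are approximated with torch.quantile, tie-handling and the exact sorting convention may differ, and the tensor names and toy dimensions are illustrative.

```python
import torch

def keep_mask_context(g_anchor: torch.Tensor, g_context_diff: torch.Tensor, beta: float) -> torch.Tensor:
    """Eqs. (4)-(5): mask (set to 0) the top beta fraction of channels by gradient
    difference over a context-different pair; consistently behaving channels are kept."""
    diff = (g_anchor - g_context_diff).abs()
    threshold = torch.quantile(diff.flatten(), 1.0 - beta)
    return (diff < threshold).float()

def keep_mask_action(g_anchor: torch.Tensor, g_action_diff: torch.Tensor, beta: float) -> torch.Tensor:
    """Eqs. (7)-(8): mask the bottom beta fraction of channels, i.e. those that stay
    consistent across an action-different pair and are therefore context-related."""
    diff = (g_anchor - g_action_diff).abs()
    threshold = torch.quantile(diff.flatten(), beta)
    return (diff >= threshold).float()

def masked_gradient(g_anchor: torch.Tensor, m_context: torch.Tensor, m_action: torch.Tensor) -> torch.Tensor:
    """Eqs. (9)-(10): keep only channels unmasked by both masks, then gate the anchor gradient."""
    m = m_context * m_action        # intersection of unmasked channels
    return m * g_anchor

# Toy usage with a single identifier token of dimension 8 (illustrative).
g_a, g_c, g_ad = torch.randn(8), torch.randn(8), torch.randn(8)
g_update = masked_gradient(g_a,
                           keep_mask_context(g_a, g_c, beta=0.6),
                           keep_mask_action(g_a, g_ad, beta=0.6))
```

In practice the masks would be recomputed at every optimization step from a freshly sampled triple and applied to the gradients of the layer-wise identifiers before the optimizer update, so the set of masked channels can vary slightly from step to step.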
Note that this step is not necessary if the users can compile a dataset of varied actions by the same individual using pre-captured images. However, the data collection is usually arduous and lengthy, making fast training of a subject-driven customization a more convenient solution. Due to the one-shot training and the concise text, the generated images X (a,c) may not be consistent with the action descriptions, or the context may differ from the original x (a,c) , but in practice, we have found that the quality is sufficient to diversify the action variation. In this way, when x (a,c) is sampled during training, we can randomly sample a image x (a,c) from X (a,c) to construct the action-different pair. And the gradient of x (a,c) with respect to the token v can be calculated as\ng (a,c) = ∂L (a,c) ∂v .(6)\nSimilarly, both samples use the prompt of the anchor sample x (a,c) . We can also calculate the absolute value of the gradient difference:\n△g a = |g (a,c) -g (a,c) |,(7)\nwhere the semantic channels with small difference can be regarded as context-related channels of the action a, which are expected to be masked. Therefore, we have\nm a k = 0, △g a k < λ β 1, △g a k ⩾ λ β ,(8)\nwhere λ β is the threshold here to mask β% of the channels. Merging Gradient Masks for Context. Due to the noise introduced by context variations, identifying action-relevant channels using only context-different or action-different pairs would be difficult and unreliable. As an evidence, the average overlap rate of channels preserved by both masks at each training step is 30.26%. Therefore, given the input triple I = x (a,c) , x (a,c) , x (a,c) , we can merge the two obtained masks m a and m c to get the final context mask m.\nIn practice, we keep only the intersection of the unmasked channels as unmasked, as we find this merging strategy performs better. Formally, we have\nm = m c ∩ m a ,(9)\nwhich is overwritten to the gradient of the anchor sample:\ng (a,c) = m ⊙ g (a,c) . (10\n)\n\"David Beckham <A>\" \"A tiger <A>\" \"A panda <A>\"\n\"Spiderman <A>\"\n\"Batman <A>\"\n\"A gorilla <A>\"" }, { "figure_ref": [], "heading": "Stable Diffusion ControlNet DreamBooth Textual Inversion ReVersion P+", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Ours", "publication_ref": [], "table_ref": [], "text": "Sample Image Custom Diffusion Note that the masked gradient g (a,c) , where action-agnostic channels are considered to be masked, is the only gradient used to update v. Therefore, our identifiers can adequately invert action-related features." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experiment Setup", "publication_ref": [ "b6", "b8", "b10", "b26" ], "table_ref": [], "text": "Baselines. For the baselines included in the comparison, we select Stable Diffusion [22], ControlNet [35], Dream-Booth [25], Textual Inversion [5], ReVersion [7], Custom Diffusion [9] and P+ [30]. Implementation Details. For ADI, we set the masking ratio β to 0.6 and use the AdamW [11] optimizer with a learning rate of 2e-4, while the training takes 3000 steps. For efficiency, DreamBooth for action-different pairs does not generate class-preservation images. While only one image is used for training, the initial learning rate is 1e-6, and the training takes 2000 steps. For a fair comparison, we use 50 steps of the DDIM [27] sampler with a scale of 7.5 for all methods. 
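As a reference for this sampling configuration (50 DDIM steps, classifier-free guidance scale 7.5), a minimal Hugging Face diffusers sketch is shown below; it is not the authors' code, the model id and prompt are illustrative, and the learned action identifier would in practice be injected through the customized per-layer conditioning rather than as a plain vocabulary token.

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

# Illustrative inference setup: 50 DDIM steps, guidance scale 7.5, 512x512 output.
# "<A>" stands in for the learned action identifier.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "A panda <A>",
    num_inference_steps=50,
    guidance_scale=7.5,
    height=512,
    width=512,
).images[0]
image.save("panda_action.png")
```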
Unless otherwise specified, Stable Diffusion v2-1-base is selected as the default pre-trained model, and images are generated at a resolution of 512×512. All experiments are conducted on a NVIDIA A100 GPU." }, { "figure_ref": [], "heading": "Quantitative Comparison", "publication_ref": [], "table_ref": [], "text": "We perform the quantitative comparison with human evaluators to assess the quantitative performance. For each subject-action pair, four images are randomly sampled from the images generated by different methods. Given (1) the exemplar images of a specific action, and (2) the textual name of the subjects, human evaluators are asked to determine whether (1) the generated action is consistent with those in the exemplar images, and (2) the generated character corresponds with the name without obvious deformations, defects, or abnormalities. A generated image will only be considered totally correct if both the action and the character are correctly generated.\nTab. 1 reports the action, subject and total accuracy for all methods. Some observations are worth highlighting: (1) Given the textual descriptions of the actions, Stable Diffusion yields the highest total accuracy of all baseline meth- ods. This suggests that the existing baselines do not take full advantage of the exemplar images.\n(2) Despite relying on the skeleton as the condition to improve the action generation, ControlNet fail to maintain the performance of subject generation, resulting in an unsatisfactory total accuracy.\n(3) The action accuracy of DreamBooth, Textual Inversion, and ReVersion is incredibly low, reflecting their complete failure to invert the action-related features. (4) Custom Diffusion and P+ improve action accuracy at more or less the expense of subject accuracy. (5) Attribute to the extended semantic conditioning space and the gradient masking strategy, our ADI dramatically improves the accuracy of action generation while maintaining excellent subject accuracy. As a result, ADI achieves the best total accuracy, outperforming the baselines by 23.92%." }, { "figure_ref": [ "fig_1" ], "heading": "Qualitative Comparison", "publication_ref": [], "table_ref": [], "text": "Fig. 4 illustrates the qualitative comparison of all methods involved. It can be observed that although text descriptions of the actions are provided, the actions generated by Stable Diffusion still differ from the examples. ControlNet can only maintain a rough consistency in posture and struggles to match the generated subjects to the desired requirements, resulting in incomplete or distorted body structures, while sacrificing diversity. And the subject-driven customization methods, as discussed earlier, fail to generate the actions or exhibit appearance characteristics that differ from the specified subjects. This suggests that they are unable to convert only the features associated with the actions. Giving the credit to the design from a perspective of gradient, our ADI decouples action-related features from action-agnostic information and blocks the inversion of the latter. This allows ADI to effectively model the invariance of the action and transfer it to different characters and animals without sacrificing image quality and variety." }, { "figure_ref": [ "fig_2" ], "heading": "Ablation Study", "publication_ref": [ "b0" ], "table_ref": [], "text": "We conduct ablation experiments on ActionBench to verify the individual effects of the proposed contributions. From the generation results in Fig. 
5, it can be observed that (1) The removal of the extension to the semantic conditioning space diminishes the inversion ability of ADI. (2) Both the gradient masks learned from the context-different and the action-different pairs are essential. Removing either one can lead to inadequate learning of action knowledge or a degradation in the quality of the subject's appearance. We attribute this to the fact that learning from a single pair is inherently noisy due to varied action visuals and the interference of action-irrelevant information.\n(3) We also attempt to reverse the gradient masks, i.e., updates to channels that should have been masked are preserved, and updates to other channels are cancelled. Obviously, this will result in action-related features not being inverted." }, { "figure_ref": [ "fig_3", "fig_4", "fig_7" ], "heading": "Further Analysis", "publication_ref": [], "table_ref": [], "text": "Impact of Masking Strategy. To validate the masking strategy in our ADI, we compare it with four other strategies in Fig. 6. Specifically, on the gradients for each update:\n(1) Uniform: we uniformly mask β percent of channels. (2) Random: we randomly mask β percent of channels. (3) Min: we mask β percent of channels with the lowest value. (4) Max: we mask β percent of channels with the highest value. We observe that none of these four strategies successfully captures high-level features related to actions, since the images they generate are independent of the specified action. And the comparison also shows that the effectiveness of our ADI not only depends on the masking itself, but also requires learning action-agnostic channels by modeling the invariance of action and context. Impact of Gradient Mask Merging Strategy. As shown in Eq. ( 9), ADI takes the intersection of the two gradient masks as the default merging strategy. We compare this with selecting the union of the two masks, and illustrate the results in Fig. 7. Since only channels that are preserved on both masks are updated, taking the intersection can effectively filter out action-agnostic features, leading to better customization of the actions. In contrast, taking the intersection may dilute the most representative action features due to the preserved context information. Impact of Masking Ratio β. In Fig. 8, we vary the masking ratio β from 0.2 to 0.8. When β is small, fewer dimensions of the gradient are masked, and more action-agnostic features are retained to hinder the generation of the subject's appearance. This situation improves as β is gradually increased. However, when β is relatively large, due to the large number of masked dimensions, some of the most dis- criminative features of actions may not be inverted, resulting in incomplete learning of actions. 
Note that the optimal value of β may be different for different actions." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we investigate an under-explored text-to-image generation task, namely action customization. To understand the challenge of the task, we first visualize the inadequacy of existing subject-driven methods in extracting action-related features from the entanglement of action-agnostic context features. Then, we propose a novel method named ADI to learn action-specific identifiers from the given images. To increase the accommodation of knowledge relevant to the action, ADI extends the inversion process with layer-wise identifier tokens. Furthermore, ADI generates gradient masks to block the contamination of action-agnostic features at the gradient level. We also contribute the ActionBench for evaluating performance on the task. Since there is a growing need to synthesize action-specific images with various new subjects, we hope that our work can highlight this important direction." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement This work was supported by STI 2030-Major Projects (2022ZD0208800), NSFC General Program (Grant No. 62176215). This work was supported by Alibaba Group through Alibaba Research Intern Program." }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Benchmark Details", "publication_ref": [], "table_ref": [], "text": "In this section, we describe the presented ActionBench in detail. The full benchmark will be publicly available." }, { "figure_ref": [], "heading": "A.1. Actions", "publication_ref": [], "table_ref": [], "text": "We define eight diverse, unique and representative actions as follows:\n• salute: \"salutes\"\n• gesture: \"raises one finger\"\n• cheer: \"raises both arms for cheering\"\n• pray: \"has hands together in prayer\"\n• sit: \"sits\"\n• squat: \"squats\"\n• meditate: \"meditates\"\n• handstand: \"performs a handstand\"\nwhere the action categories (displayed in boldface) are used only to distinguish between actions, and the actions can be best described with the exemplar images. And the text descriptions (displayed in italics) that are used for Stable Diffusion are obtained using an image captioning model." }, { "figure_ref": [], "heading": "A.2. 
Subjects", "publication_ref": [], "table_ref": [], "text": "We provide 23 subjects for evaluation as follows:\n• generic human: \"A boy\", \"A girl\", \"A man\", \"A woman\", \"An old man\" • well-known personalities: \"Barack Obama\", \"Michael Jackson\", \"David Beckham\", \"Leonardo DiCaprio\", \"Messi\", \"Spiderman\", \"Batman\" • animals: \"A dog\", \"A cat\", \"A lion\", \"A tiger\", \"A bear\", \"A polar bear\", \"A fox\", \"A cheetah\", \"A monkey\", \"A gorilla\", \"A panda\" where diverse and unseen subjects and the introduction of animals demand that, models not only retain pre-trained knowledge without forgetting, but also accurately generate animal representations without distortion or anomalies. " }, { "figure_ref": [], "heading": "B. Baseline Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C. Additional Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.1. Comparison with Action-Prior DreamBooth", "publication_ref": [], "table_ref": [], "text": "Our ADI utilizes the generated action-different samples with the same context to capture the context-related features. To analyze the advantages of controlling updates with these data rather than directly employing them in training, we present a new baseline named action-prior DreamBooth, which replaces the class prior generated by original Stable Diffusion with these action-different samples. Therefore, in addition to the inherent action invariance, contextual invariance also emerges in the training data. However, as shown in Fig. 9, this new baseline still struggles with inverting action-specific features. This observation suggests a lack of ability to capture high-level invariance." }, { "figure_ref": [], "heading": "C.2. Generalization Across Diverse Styles", "publication_ref": [], "table_ref": [], "text": "ADI is designed to separate and inverse abstract the action concepts from the details of subjects and objects, background, color, or style in user images. This allows the generation images to generalize to specific styles through prompting, shown as Fig. 10.\n\"anime style\" \"cartoon style\" \"Picasso style\" \"oil painting style\" \"sketching style\" Figure 10. ADI can generate images with different styles by prompting. The original prompt is \"A girl <A>\" where \"<A>\" represents the action pray." }, { "figure_ref": [], "heading": "C.3. Visualization of Cross-Attention Maps", "publication_ref": [], "table_ref": [], "text": "To explain why certain channels can be interpreted as \"action-related\", we visualize the cross-attention maps related to the learned identifiers in Fig. 11. As observed, the learned identifiers focus more on the contour information of the actions rather than the human body. This indicates that ADI avoids reversion on appearance information, thereby enabling generalization to different subjects. " }, { "figure_ref": [], "heading": "C.5. Additional Qualitative Results", "publication_ref": [], "table_ref": [], "text": "To show the effectiveness of ADI, we illustrate additional generation results in Fig. 13, covering all actions within Ac-tionBench. The generated images maintain the same action while offering a rich diversity, indicating that the learned identifiers contain solely action information and do not encapsulate irrelevant contextual details such as background, appearance, or even orientation. 
" }, { "figure_ref": [], "heading": "Sample Image", "publication_ref": [], "table_ref": [], "text": "Generated Images" } ]
Figure 1. Action customization results of our ADI method. By inverting representative action-related features, the learned identifiers "<A>" can be paired with a variety of characters and animals to contribute to the generation of accurate, diverse and high-quality images.
Learning Disentangled Identifiers for Action-Customized Text-to-Image Generation
[ { "figure_caption": "Figure 3 .3Figure3. Overview of our ADI method. ADI learns more efficient action identifiers by extending the semantic conditioning space and masking gradient updates to action-agnostic channels.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Visual comparisons of all methods.For each action, we present the generated results showcasing its pairing with a human character and an animal.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Ablation study. We remove or revise one implementation at a time to demonstrate the effects of the identifier extension and the gradient masking.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Visual comparison of masking strategies. The four compared strategies fail to mask the updates from agnostic channels and invert the action-related features.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Effect of gradient mask merging strategy. Preserving the intersection of gradient masks can better invert the representative action features.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "< l a t e x i t s h a 1 _ b a s e 6 4 = \" P W K u c w 9 u p b Y Q b H C T o u G s 3 3 x y d t Q = \" > A A A B 8 H i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 B I v g K S S l q B e h 4 M V j B d s q b S i b 7 a R d u r s J u x u h l P 4 K L x 4 U 8 e r P 8 e a / c d v m o K 0 P B h 7 v z T A z L 0 o 5 0 8 b 3 v 5 3 C 2 v r G 5 l Z x u 7 S z u 7 d / U D 4 8 a u k k U x S b N O G J e o i I R s 4 k N", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "x 8 5 h j + w P n 8 A Z U N j 5 A = < / l a t e x i t > = 0.2 < l a t e x i t s h a 1 _ b a s e 6 4 = \" d 5 8 x Q e C 2 8 G W h Z n T y 3 N a j 2 m m 3 W Z I = \" > A A A B 8 H i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 i k q B e h 4 M V j B f s h b S i b 7 a Z d u p u E 3 Y l Q S n + F F w + K e P X n e P P f u G 1 z 0 N Y H A 4 / 3 Z p i Z F 6 Z S G P S 8 b 6 e w t r 6 x u V X c L u 3 s 7 u 0 f l A + P m i b J N O M N l s h E t 0 N q u B Q x b 6 B A y d u p 5 l S F k r f C 0 e 3 M b z 1 x b U Q S P + A 4 5 Y G i g 1 h E g l G 0 0 m M 3 5 E h v P L f a K 1 c 8 1 5 u D r B I / J x X I U e + V v 7 r 9 h G W K x 8 g k N a b j e y k G E 6 p R M M m n p W 5 m e E r Z i A 5 4 x 9 K Y K m 6 C y f z g K T m z S p 9 E i b Y V I 5 m r v y c m V B k z V q H t V B S H Z t m b i f 9 5 n Q y j 6 2 A i 4 j R D H r P F o i i T B B M y + 5 7 0 h e Y M 5 d g S y r S w t x I 2 p J o y t B m V b A j + 8 s u r p H n h + p d u 9 b 5 a q X l 5 H E U 4 g V M 4 B x + u o A Z 3 U I c G M F D w D K / w 5 m j n x X l 3 P h a t B S e f O Y Y / c D 5 / A J g V j 5 I = < / l a t e x i t > = 0.4 < l a t e x i t s h a 1 _ b a s e 6 4 = \" h b k", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Effect of the masking ratio β. A value close to can balance the inversion of action-related features and the removal of action-agnostic features.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Quantitative comparisons with competing methods. 
Action, subject and total accuracies (%) are reported.", "figure_data": "Methods | Action | Subject | Total\nStable Diffusion [22] | 30.71 | 84.51 | 27.17\nControlNet [35] | 41.30 | 42.66 | 19.29\nDreamBooth [25] | 2.45 | 95.65 | 2.45\nTextual Inversion [5] | 2.17 | 86.14 | 1.90\nReVersion [7] | 1.63 | 84.51 | 1.63\nCustom Diffusion [9] | 29.62 | 53.53 | 7.07\nP+ [30] | 26.90 | 80.16 | 20.92\nADI (Ours) | 60.33 | 85.87 | 51.09", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
Siteng Huang; Biao Gong; Yutong Feng; Xi Chen; Yuqian Fu; Yu Liu; Donglin Wang
[ { "authors": "Zhe Cao; Gines Hidalgo; Tomas Simon; Shih-En Wei; Yaser Sheikh", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b0", "title": "OpenPose: Realtime multi-person 2d pose estimation using part affinity fields", "year": "2021" }, { "authors": "Prafulla Dhariwal; Alexander Quinn; Nichol ", "journal": "", "ref_id": "b1", "title": "Diffusion models beat GANs on image synthesis", "year": "2021" }, { "authors": "Ming Ding; Zhuoyi Yang; Wenyi Hong; Wendi Zheng; Chang Zhou; Da Yin; Junyang Lin; Xu Zou; Zhou Shao; Hongxia Yang; Jie Tang", "journal": "", "ref_id": "b2", "title": "CogView: Mastering text-toimage generation via transformers", "year": "2021" }, { "authors": "Oran Gafni; Adam Polyak; Oron Ashual; Shelly Sheynin; Devi Parikh; Yaniv Taigman", "journal": "", "ref_id": "b3", "title": "Make-A-Scene: Scenebased text-to-image generation with human priors", "year": "2022" }, { "authors": "Rinon Gal; Yuval Alaluf; Yuval Atzmon; Or Patashnik; Amit Haim Bermano; Gal Chechik; Daniel Cohen-Or", "journal": "", "ref_id": "b4", "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion", "year": "2023" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "", "ref_id": "b5", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Ziqi Huang; Tianxing Wu; Yuming Jiang; Kelvin C K Chan; Ziwei Liu", "journal": "", "ref_id": "b6", "title": "ReVersion: Diffusion-based relation inversion from images", "year": "2023" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "", "ref_id": "b7", "title": "Auto-encoding variational bayes", "year": "2014" }, { "authors": "Nupur Kumari; Bingliang Zhang; Richard Zhang; Eli Shechtman; Jun-Yan Zhu", "journal": "", "ref_id": "b8", "title": "Multi-concept customization of text-to-image diffusion", "year": "2023" }, { "authors": "Yuheng Li; Haotian Liu; Qingyang Wu; Fangzhou Mu; Jianwei Yang; Jianfeng Gao; Chunyuan Li; Yong Jae Lee", "journal": "", "ref_id": "b9", "title": "GLIGEN: Open-set grounded text-to-image generation", "year": "2023" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b10", "title": "SGDR: Stochastic gradient descent with warm restarts", "year": "2017" }, { "authors": "Liqian Ma; Xu Jia; Qianru Sun; Bernt Schiele; Tinne Tuytelaars; Luc Van Gool", "journal": "", "ref_id": "b11", "title": "Pose guided person image generation", "year": "2017" }, { "authors": "Yifang Men; Yiming Mao; Yuning Jiang; Wei-Ying Ma; Zhouhui Lian", "journal": "", "ref_id": "b12", "title": "Controllable person image synthesis with attribute-decomposed GAN", "year": "2020" }, { "authors": "Chong Mou; Xintao Wang; Liangbin Xie; Jian Zhang; Zhongang Qi; Ying Shan; Xiaohu Qie", "journal": "", "ref_id": "b13", "title": "T2I-Adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models", "year": "2023" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b14", "title": "GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2022" }, { "authors": " Openai", "journal": "", "ref_id": "b15", "title": "", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", 
"journal": "", "ref_id": "b16", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "", "ref_id": "b17", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b18", "title": "Hierarchical text-conditional image generation with CLIP latents", "year": "2022" }, { "authors": "Scott E Reed; Zeynep Akata; Xinchen Yan; Lajanugen Logeswaran; Bernt Schiele; Honglak Lee", "journal": "", "ref_id": "b19", "title": "Generative adversarial text to image synthesis", "year": "2016" }, { "authors": "Xiaoming Yurui Ren; Junming Yu; Thomas H Chen; Ge Li; Li", "journal": "", "ref_id": "b20", "title": "Deep image spatial transformation for person image generation", "year": "2020" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b21", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b22", "title": "U-Net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Prasun Roy; Subhankar Ghosh; Saumik Bhattacharya; Umapada Pal; Michael Blumenstein", "journal": "", "ref_id": "b23", "title": "TIPS: text-induced pose synthesis", "year": "2022" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b24", "title": "DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Seyed Kamyar; Seyed Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans; Jonathan Ho; David J Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b25", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b26", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Ming Tao; Hao Tang; Fei Wu; Xiaoyuan Jing; Bing-Kun Bao; Changsheng Xu", "journal": "", "ref_id": "b27", "title": "DF-GAN: A simple and effective baseline for text-to-image synthesis", "year": "2022" }, { "authors": "Narek Tumanyan; Michal Geyer; Shai Bagon; Tali Dekel", "journal": "", "ref_id": "b28", "title": "Plug-and-play diffusion features for text-driven image-to-image translation", "year": "2023" }, { "authors": "Andrey Voynov; Qinghao Chu; Daniel Cohen-Or; Kfir Aberman", "journal": "", "ref_id": "b29", "title": "P+: Extended textual conditioning in text-toimage generation", "year": "2023" }, { "authors": "Tao Xu; Pengchuan Zhang; Qiuyuan Huang; Han Zhang; Zhe Gan; Xiaolei Huang; Xiaodong He", "journal": "", "ref_id": "b30", "title": "AttnGAN: Finegrained text to image generation with attentional generative adversarial networks", "year": "2018" }, { "authors": "Xiaogang Xu; Ying-Cong Chen; Xin Tao; Jiaya Jia", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b31", "title": "Textguided human image manipulation via image-text shared space", "year": "2022" }, { "authors": "Lingbo Yang; Pan Wang; Chang Liu; Zhanning 
Gao; Peiran Ren; Xinfeng Zhang; Shanshe Wang; Siwei Ma; Xian-Sheng Hua; Wen Gao", "journal": "IEEE Transactions on Image Processing", "ref_id": "b32", "title": "Towards fine-grained human pose transfer with detail replenishing network", "year": "2021" }, { "authors": "Jiahui Yu; Yuanzhong Xu; Jing Yu Koh; Thang Luong; Gunjan Baid; Zirui Wang; Vijay Vasudevan; Alexander Ku; Yinfei Yang; Burcu Karagol Ayan; Ben Hutchinson; Wei Han; Zarana Parekh; Xin Li; Han Zhang; Jason Baldridge; Yonghui Wu", "journal": "Transactions on Machine Learning Research", "ref_id": "b33", "title": "Scaling autoregressive models for content-rich text-to-image generation", "year": "2022" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b34", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Pengze Zhang; Lingxiao Yang; Jianhuang Lai; Xiaohua Xie", "journal": "", "ref_id": "b35", "title": "Exploring dual-task correlation for pose guided person image generation", "year": "2022" }, { "authors": "Shengjia Zhao; Jiaming Song; Stefano Ermon", "journal": "", "ref_id": "b36", "title": "Info-VAE: Balancing learning and inference in variational autoencoders", "year": "2019" }, { "authors": "Minfeng Zhu; Pingbo Pan; Wei Chen; Yi Yang", "journal": "", "ref_id": "b37", "title": "DM-GAN: Dynamic memory generative adversarial networks for text-to-image synthesis", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 434.16, 512.18, 97.16, 10.32 ], "formula_id": "formula_0", "formula_text": "X = {x 1 , x 2 , • • • , x N }," }, { "formula_coordinates": [ 4, 65.71, 699.73, 216.78, 14.11 ], "formula_id": "formula_1", "formula_text": "L = E z∼E(x),y,ϵ∼N (0,1),t ∥ϵ -ϵ θ (z t , t, y)∥ 2 2 , (1" }, { "formula_coordinates": [ 4, 282.49, 703.02, 3.87, 8.64 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 5, 133.13, 196.81, 153.23, 24.44 ], "formula_id": "formula_3", "formula_text": "g (a,c) = ∂L (a,c) ∂v ,(2)" }, { "formula_coordinates": [ 5, 133.13, 223.21, 153.23, 24.44 ], "formula_id": "formula_4", "formula_text": "g (a,c) = ∂L (a,c) ∂v .(3)" }, { "formula_coordinates": [ 5, 120.26, 348.35, 166.1, 11.37 ], "formula_id": "formula_5", "formula_text": "△g c = |g (a,c) -g (a,c) |,(4)" }, { "formula_coordinates": [ 5, 113.78, 454.73, 107.72, 24.6 ], "formula_id": "formula_6", "formula_text": "m c k = 0, △g c k ⩾ γ β 1, △g c k < γ β ." }, { "formula_coordinates": [ 5, 391.88, 352.22, 153.23, 24.44 ], "formula_id": "formula_7", "formula_text": "g (a,c) = ∂L (a,c) ∂v .(6)" }, { "formula_coordinates": [ 5, 378.63, 420.29, 166.48, 11.37 ], "formula_id": "formula_8", "formula_text": "△g a = |g (a,c) -g (a,c) |,(7)" }, { "formula_coordinates": [ 5, 371.51, 479.61, 173.6, 24.6 ], "formula_id": "formula_9", "formula_text": "m a k = 0, △g a k < λ β 1, △g a k ⩾ λ β ,(8)" }, { "formula_coordinates": [ 5, 395.87, 662.25, 149.25, 11.37 ], "formula_id": "formula_10", "formula_text": "m = m c ∩ m a ,(9)" }, { "formula_coordinates": [ 5, 386.42, 698.54, 154.54, 11.37 ], "formula_id": "formula_11", "formula_text": "g (a,c) = m ⊙ g (a,c) . (10" }, { "formula_coordinates": [ 5, 540.96, 700.93, 4.15, 8.64 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 8, 373.67, 236.84, 14.94, 8.64 ], "formula_id": "formula_13", "formula_text": "g 0 z H B 9 S h U R E H N v R 6 G b m t 5 9 Q a Z b I e z N O M R R k I F n M K D F W e u x G a M i 1 7 1 V 7 5 Y r v + X O 4 q y T I S Q V y N H r l r 2 4 / o Z l A a S g n W n c C P z X h h C j D K M d p q Z t p T A k d k Q F 2 L J V E o A 4 n 8 4 O n 7 p l V + m 6 c K F v S u H P 1 9 8 S E C K 3 H I r K d g p i h X v Z m 4 n 9 e J z P x V T h h M s 0 M S r p Y F G f c N Y k 7 + 9 7 t M 4 X U 8 L E l h C p m b 3 X p k C h C j c 2 o Z E M I l l 9 e J a 2 q F 1 x 4 t b t a p e 7 n c R T h B E 7 h H A K 4 h D r c Q g O a Q E H A M 7 z C m 6 O c F + f d + V i 0 F p" }, { "formula_coordinates": [ 8, 465.38, 236.84, 8.67, 6.78 ], "formula_id": "formula_14", "formula_text": "U i t s b l y I h X N K C F V 6 M X z U h f s Q = \" > A A A B 8 H i c b V B N S 8 N A E N 3 U r 1 q / q h 6 9 L B b B U 0 h E q h e h 4 M V j B f s h b S i b 7 a R d u p u E 3 Y l Q S n + F F w + K e P X n e P P f u G 1 z 0 N Y H A 4 / 3 Z p i Z F 6 Z S G P S 8 b 6 e w t r 6 x u V X c L u 3 s 7 u 0 f l A + P m i b J N I c G T 2 S i 2 y E z I E U M D R Q o o Z 1 q Y C q U 0 A p H t z O / 9 Q T a i C R + w H E K g W K D W E S C" } ]
2024-03-29
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b8", "b11", "b85", "b89", "b22", "b39", "b82", "b91", "b9", "b47", "b19", "b67", "b82", "b86", "b78", "b97", "b81", "b83", "b14", "b59", "b90", "b18", "b38", "b75", "b40", "b79", "b50", "b57", "b72", "b79", "b0", "b23", "b26" ], "table_ref": [], "text": "Video object tracking [9,12,86,90] is a fundamental task in computer vision with wide-ranging applications spanning from surveillance [89] to augmented reality [23], where accuracy and robustness are paramount. While traditional RGB trackers have shown promise in general settings, they often struggle with challenging scenarios such as occlusions [40,47], low visibility [83,92], or fast-moving objects [10,48]. For more reliable tracking under such challenging conditions, the integration of auxiliary modalities (X) like depth [20,68], thermal [83,87], and event [79,98] have proven effective in multimodal tracking.\nWhile the idea of fusing RGB with other modalities holds promise [82,84], the main challenge is the discrepancy in the representation of information across different modalities. Despite the success of previous fusion works [15,60] tailored for each specific scenario to improve RGB trackers, their reliance on modality-specific designs limits adaptability. Recent initiatives [69,91] towards a uniform architecture for various modalities show promise but necessitate modality-specific fine-tuning. This approach leads to multiple parameter sets, as shown in Fig. 1(a), thereby compromising practicality in diverse real-world applications.\nIn this work, we aim to avoid such modality-specific fine-tuning to keep only one model-parameter set at all times. Another practical constraint arises from the differ-ences in available auxiliary modalities across settings. Unifying modalities by a common representation can handle any modality at its disposal, addressing the mentioned problems by its virtue. However, two additional multifaceted challenges emerge from the scarcity of multimodal datasets and the absence of all paired data combinations. The former makes cross-modal mappings through a large-scale data prior unfeasible [19,39,76], while the latter leads to missing modalities and renders joint learning using all possible combinations of paired data unfeasible [41,80].\nTo achieve such a unification, in this paper, we present Un-Track, denoting \"one\" in French, which learns a cohesive embedding across diverse input modalities. Unlike previous approaches [51,58,73,80], Un-Track relies solely on RGB-X pairs for training, with X representing auxiliary modalities, without the need for all modalities to co-occur. Our objective is to discover a shared embedding seamlessly binding all auxiliary modalities (as depicted in Fig. 1(b)). More specifically, we leverage the factorization prior, allowing reasoning about a common embedding directly from the low-rank latent space. Factorization is a simple composition prior with the assumption that the global approximation can be constructed from a set of subset vectors. The factorization prior, successfully utilized in previous studies [1,4,24], is employed in our work to reconstruct a shared embedding. This process transforms the heterogeneous modal representation into a uniform one, thereby facilitating the emergent cross-modal alignment.\nMoreover, to harness the full potential of auxiliary inputs while maintaining efficiency, we leverage cross-modal features as prompts to enable RGB-X interaction. 
Different from previous works [25,27], our goal is to enhance less reliable tokens, defined by a learnable score function, using multimodal cues. We approach this as a token recovery problem and leverage low-rank factorization to achieve the goal, which is first suggested in multimodal fusion, to the best of our knowledge. With its unified model architecture and prompting blocks, Un-Track is the first to offer support for cross-modality alignment under a single architecture with uniform parameters. In comparison to our RGB baseline with 21.50 GFLOPs and 92M parameters, Un-Track introduces only +2.14 GFLOPs with +6.6M parameters, resulting in a significant +8.1 absolute F-score gain demonstrated on the DepthTrack dataset. Extensive comparisons across five datasets with different modalities validate Un-Track's superiority over specialized SOTA models, surpassing both unified trackers and modality-specific fine-tuned counterparts by a substantial margin." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b6", "b70", "b31", "b32", "b49", "b60", "b84", "b5", "b10", "b53", "b62", "b13", "b41", "b69", "b94", "b36", "b15", "b66", "b55", "b59", "b56", "b77", "b96", "b87", "b92", "b90", "b65", "b34", "b35", "b40", "b43", "b30", "b18", "b38", "b25" ], "table_ref": [], "text": "Multimodal tracking: Video object tracking [7,71] aims to detect objects in a video sequence based on their initial positions. Early approaches treated tracking as a per-frame target-matching problem, with Siamese networks [32,33,50,61,74,85] being a notable example. More recently, transformer-based methods [2,5,6,11,54,63] have gained popularity for feature extraction and per-frame correlation in tracking. Large-scale training datasets [14,22,42,45] have empowered RGB trackers to uniformly apply parameters across various applications. While RGB trackers deliver promising results, challenges such as occlusion, low illumination, and fast-moving scenes have led to the exploration of additional modalities. Several works have investigated the role of depth [70,95], thermal [37,38], and event modalities [49,55] in enhancing tracking performance. Specifically, depth cues [16,67] contribute to handling objects with different camera distances; thermal cameras [56,60] address challenges such as low illumination; event cameras improve the temporal awareness [57,78,97].\nDespite the plausible achievements, many rely on modality-specific blocks designed for individual modalities [88,93], limiting their adaptability. Recent efforts have focused on achieving architectural unification [69,91], yet they still necessitate modality-specific fine-tuning, resulting in distinct parameter sets for different modalities. The ideal scenario would involve a large-scale dataset encompassing all possible modal combinations, but current tracking datasets predominantly feature a single modality -depth [66,94], thermal [35,36], or event [55] -posing challenges for a unified model with a single parameter set. Learning with Missing modalities: Recent research has addressed real-world scenarios where models must cope with missing modalities [41,44]. One common strategy involves estimating missing values by learning joint multimodal representations [31,75], feasible when complete samples are available during training. However, tracking datasets typically exhibit only one modality at a time, complicating the learning of such joint representations. 
Other works [19,39] implicitly learn cross-modal alignment end-to-end using large-scale datasets and deep networks, demanding substantial computational resources. Extending such approaches to tracking is challenging due to limited downstream datasets and real-world applicability constraints [26,64]. In contrast to existing methods, our approach investigates cross-modal relationships by leveraging edge priors to learn a joint representation that unifies all modalities. Our method does not require the simultaneous occurrence of all modalities, offering a unique perspective on dealing with diverse and individual modalities." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "Overall Framework", "publication_ref": [ "b30", "b20" ], "table_ref": [], "text": "In this paper, our primary focus is on multimodal tracking, with the constraint that only one modality is available at a time, as shown in Fig. 2. We define our multimodal tracking dataset as M = {M_D, M_T, M_E}, where M_X represents the subset dataset with only a single modality X available. Conventional methods tackling missing modalities often use dummy inputs for the absent pixels [31,53], simulating complete datasets with all possible combinations of paired datasets. In contrast, our method transforms any auxiliary input into a shared embedding, seamlessly binding all modalities together and creating a complete and paired representation with the master RGB feature.
To mitigate overfitting on sparse downstream multimodal datasets, we adopt a transformer-based RGB tracker with frozen parameters and fine-tune it for multimodal tracking. Leveraging a lightweight outer prompting method, we identify uncertain tokens at each scale and enhance them with cross-modal awareness. Simultaneously, an inner fine-tuning process is implemented using the LoRA technique [21].
During training, our model learns the shared embedding from samples in the mixed dataset M, effectively binding all modalities together. At inference, our model can accommodate any modal input X, thanks to the emergent alignment. Our trainable parameters only include the cross-modal binding, the outer prompting parameters, and the inner LoRA parameters, ensuring a training-friendly pipeline that can be efficiently trained end-to-end on a single 24G GPU." }, { "figure_ref": [ "fig_2" ], "heading": "Shared Embedding", "publication_ref": [], "table_ref": [], "text": "Explicit Edge Awareness: We observe that, as illustrated in Fig. 2(a), depth data introduce 3D distance information, effectively delineating objects with varying granularity and enhancing the sharpness of 3D boundaries; thermal images generate a scene heat map, highlighting objects based on their temperatures and providing clearer contours; event data capture intensity changes, particularly around an object's outer boundary. Notably, a consistent feature emerges across these modalities: the representation of the \"true\" object shape, often manifested through edges.
Motivated by this observation, our objective is to harness an edge embedding to unify all modalities. To achieve this, as shown in Fig. 3, we generate gradient maps from the auxiliary modalities by computing differences between neighboring pixels along both the x- and y-axes. Simultaneously, without discarding texture edges, we also generate RGB gradient maps. Subsequently, all gradient maps are integrated with the visual feature, forming the gradient feature G.
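To make this explicit edge step concrete, below is a minimal PyTorch sketch (our own illustration, not the released implementation) of the neighbor-difference gradient maps described above; the function name, the zero padding, and the channel stacking are assumptions, and the subsequent learnable integration of these maps with the visual feature into G is not shown.

```python
import torch
import torch.nn.functional as F


def gradient_maps(x: torch.Tensor) -> torch.Tensor:
    """Finite-difference gradients along x and y for a (B, C, H, W) tensor.

    Returns a (B, 2*C, H, W) tensor stacking the horizontal and vertical
    differences, zero-padded so the spatial size is preserved.
    """
    dx = x[:, :, :, 1:] - x[:, :, :, :-1]   # neighbor difference along the x-axis
    dy = x[:, :, 1:, :] - x[:, :, :-1, :]   # neighbor difference along the y-axis
    dx = F.pad(dx, (0, 1, 0, 0))            # pad the last column back to width W
    dy = F.pad(dy, (0, 0, 0, 1))            # pad the last row back to height H
    return torch.cat([dx, dy], dim=1)


# Example: gradients of an RGB frame and of an auxiliary map (depth/thermal/event),
# both of which would then be fused into the gradient feature G.
rgb = torch.rand(1, 3, 256, 256)
aux = torch.rand(1, 3, 256, 256)   # auxiliary modality converted to an RGB-like form
print(gradient_maps(rgb).shape, gradient_maps(aux).shape)  # (1, 6, 256, 256) each
```

The same routine applies to the RGB frame and to any auxiliary map once it has been converted to an image-like tensor.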
Implicit Low-Rank Reconstruction: While edges present a shared feature across different modalities, exclusively transforming all modalities into edges may risk overlooking modality-specific clues. Therefore, we introduce an implicit learning pipeline to complement this by discovering the shared embedding, guided by the previously generated explicit edge awareness. This combined approach allows for a more effective identification of the shared embedding, leveraging both data-driven learning and edge priors.
In technical terms, we redefine the challenge of learning the shared embedding as a quest for a shared low-rank approximation. Both objectives share the essence of distilling common features across all modalities. However, direct estimation of the low-rank approximation becomes impractical due to the distinct data domains and modal representations. In response, we propose a pragmatic strategy: decomposing the shared low-rank vector into the low-rank approximation of each subset component. This alternative, more manageable and feasible within a single domain, lays the groundwork for approximating the global shared low-rank representation from these individual components.
The overall algorithm can be found in Algorithm 1. Specifically, let M be the input feature with mixed auxiliary modalities, decomposed into subset features D, T, E representing depth, thermal, and event samples from the subset datasets M_D, M_T, M_E. Their respective rank-k matrices D_k, T_k, E_k are approximated by:
D_k = σ_d(D), T_k = σ_t(T), E_k = σ_e(E),  (1)
where σ_x is a modality-specific mapping, implemented as a simple MLP, projecting features from the input channel dimension c to the low-rank space k (k < c). Simultaneously, we compute the low-rank matrix G_k from the gradient feature G.
The global shared low-rank matrix M_k is then approximated by fusing the subset low-rank matrices D_k, T_k, E_k, along with the gradient guidance. Technically, we concatenate D_k, T_k, E_k and learn the joint low-rank approximation, incorporating it with the gradient guidance. This pipeline can be expressed as follows:
M_k = ϕ^R_1([D_k, T_k, E_k]) + ϕ^R_2(G_k),  (2)
where [·] is the channel concatenation and ϕ^R_i are MLP projections to the low-rank latent space. Finally, we reconstruct the shared embedding F through:
F = Φ^R(M_k) + G,  (3)
where Φ^R is another MLP that projects the jointly-learned low-rank matrix back to the original feature space. Our ablation studies validate that this subset low-rank approximation and regrouping efficiently unify all input modalities, despite their heterogeneous representations. " }, { "figure_ref": [ "fig_4" ], "heading": "Outer Modal Prompting", "publication_ref": [ "b20", "b65" ], "table_ref": [], "text": "An RGB-only tracker may fail to perform accurately in corner cases where auxiliary clues can contribute. Drawing inspiration from the success of adapting large pre-trained models to specific downstream tasks [21,25], we introduce a modal prompting method devised to enhance the RGB tokens I with the modality-aware feature F, as shown in Fig. 4. Specifically, our approach employs a shrinkage token fusion strategy. Taking I as an example, we categorize tokens into three groups (negative, uncertain, and positive) based on a dynamic scoring function s. These regions are defined in mask form, expressed as m_n, m_u, m_p = s(I). To harness multimodal clues effectively, we replace negative tokens with those from the other modality, mask uncertain ones with dummy values, and retain the positive tokens.
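As a concrete illustration of this scoring-and-exchange policy, here is a minimal PyTorch sketch (our simplification, not the authors' code): a linear head plays the role of the dynamic scoring function s, the quantile-based split into negative/uncertain/positive groups uses the 1/4 ratio explored in the ablation tables, and zeros stand in for the dummy values; the low-rank recovery applied afterwards (Eqs. (4)-(6)) is not included here.

```python
import torch
import torch.nn as nn


class TokenExchange(nn.Module):
    """Categorize RGB tokens by a learnable score and mix in auxiliary tokens.

    Tokens scored in the bottom quantile are treated as negative and replaced by
    the corresponding tokens of the auxiliary feature F; the top quantile is kept
    as-is (positive); the rest are uncertain and zeroed out (dummy values) so a
    later low-rank recovery step can complete them from reliable neighbors.
    """

    def __init__(self, dim: int, ratio: float = 0.25):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # dynamic scoring function s(.)
        self.ratio = ratio              # fraction of positive (= negative) tokens

    def forward(self, I: torch.Tensor, F: torch.Tensor) -> torch.Tensor:
        s = self.score(I).squeeze(-1)                         # (B, N) confidence per token
        lo = torch.quantile(s, self.ratio, dim=1, keepdim=True)
        hi = torch.quantile(s, 1.0 - self.ratio, dim=1, keepdim=True)
        m_n = (s <= lo).unsqueeze(-1).float()                 # negative mask
        m_p = (s >= hi).unsqueeze(-1).float()                 # positive mask
        m_u = 1.0 - torch.clamp(m_n + m_p, max=1.0)           # uncertain mask
        # negative -> auxiliary token, positive -> keep, uncertain -> dummy (zeros)
        return m_n * F + m_p * I + m_u * torch.zeros_like(I)


tokens_rgb = torch.rand(2, 196, 256)   # (batch, tokens, channels)
tokens_aux = torch.rand(2, 196, 256)   # shared-embedding tokens from the auxiliary input
print(TokenExchange(dim=256)(tokens_rgb, tokens_aux).shape)  # torch.Size([2, 196, 256])
```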
Subsequently, these modified tokens undergo projection into the low-rank space using the approximation function σ_c. Our objective is to enhance robustness by completing the uncertain tokens with information from reliable neighboring tokens. Mathematically, we obtain the first low-rank matrix I^l_1 by:
I^l_1 = σ_c(m_n · F + m_p · I).  (4)
Next, we target the uncertain tokens by merging them with the paired tokens from the other modality and approximate the low-rank matrix similarly. Here, we aim to discard the possible noise, resulting in a matrix that is more informative than the original. Letting σ_n be another approximation function, we obtain the second low-rank matrix I^l_2 by:
I^l_2 = σ_n(m_u · F + m_u · I).  (5)
Then, we fuse these two low-rank matrices in the low-rank space, forming the shared low-rank matrix I^l for input I. In such a manner, we improve the image token modeling by fully benefiting from the auxiliary clues. Mathematically, the whole process can be formulated as:
I^l = ϕ^P([I^l_1, I^l_2]),  (6)
where ϕ^P is a learnable fusion layer. Similarly, from the input F, we follow the same process to obtain the low-rank matrix F^l. For the cross-modal fusion, we add these two low-rank matrices and then project back to the original space. We obtain the fused output O by:
O = Φ^P(I^l + F^l),  (7)
where Φ^P is another learnable MLP.
Our method can be regarded as a mixer of token exchange (for negative tokens) and token fusion (for uncertain tokens), while retaining the most informative modality-specific clues (for positive tokens). As the majority of fusion operations occur in the low-rank feature space, our progressive cross-modal shrinkage avoids imposing a significant additional computational burden, while being able to excavate and accumulate the rich cues from each modality for effective modal prompting." }, { "figure_ref": [], "heading": "Inner Finetuning", "publication_ref": [ "b20", "b71" ], "table_ref": [], "text": "In addition to the outer modal prompting, we incorporate the LoRA technique [21] for more efficient finetuning. For each transformer attention module with the weight matrix W_0 ∈ R^{d×k}, we introduce two learnable matrices B ∈ R^{d×r} and A ∈ R^{r×k}. This leads to the replacement of the frozen attention mechanism h = W_0 x with the new LoRA attention:
h = W_0 x + BAx.  (8)
To train our network, we adopt the same loss functions as our baseline tracker [72] for end-to-end learning." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Training Data", "publication_ref": [ "b65", "b35" ], "table_ref": [], "text": "In the absence of a comprehensive multi-modal tracking dataset encompassing all possible combinations (RGB-D-T-E), we have only one RGB+X pairing (i.e., RGB+depth, RGB+thermal, or RGB+event) at a time. The conventional modality-specific adapted settings are referred to as \"X-Specific\", whereas the main target of this paper, one model trained on all modality pairs, is called \"Uni-model\" (or a single set of parameters).
The training and evaluation settings are summarized as follows:
Model / Trained on / Evaluated on: X-Specific_i / RGB_i + X_i / RGB_i + X_i; Uni-model / all RGB_i + X_i pairs (i ∈ {D, T, E}) / RGB_i + X_i.
Our RGB-D samples are sourced from DepthTrack [66], a pioneering RGB-D tracking benchmark with 150 training long-term sequences. RGB-T samples are extracted from the extensive LasHeR [36] dataset, featuring 979 diverse training sequences. RGB-E samples are obtained from VisEvent [55], which boasts 500 real-world sequences. Each D/T/E input is transformed into an RGB-like form." }, { "figure_ref": [], "heading": "Within distribution Evaluation", "publication_ref": [ "b65", "b35", "b65", "b90", "b13", "b41", "b64" ], "table_ref": [], "text": "Given that DepthTrack [66], LasHeR [36], and VisEvent [55] provide domain-specific testing sequences, our initial evaluation focuses on these within-distribution sequences. For each dataset, we adhere to the metrics specified in the original papers and prior standards [69, 91] for evaluation. Comparison on DepthTrack [66]: For evaluation, we use precision (Pr) and recall (Re), as well as the F-score, which is the primary metric. As shown in Tab. 1, when exclusively trained and fine-tuned on DepthTrack, Un-Track achieves a +2.1% absolute precision improvement over the current depth-specific SOTA ViPT [91]. Notably, even when trained with mixed data using a single parameter set, Un-Track outperforms the depth-specific ViPT. Furthermore, when the current ViPT is jointly trained on all datasets with a single set of parameters, its performance significantly deteriorates.
Additionally, we observe that trackers excelling on other tracking datasets [14,22,42] might struggle in RGB-D downstream settings. UniNext [65], a leading tracker trained on various large-scale tracking datasets and related image/video tasks, exhibits poor performance. In contrast, our model achieves cross-modal unification within a single set of parameters, surpassing all depth-specific counterparts and performing closely to our specialized version. This underscores the efficacy of our shared embedding in achieving global alignment across diverse modalities." }, { "figure_ref": [], "heading": "Thermal-Specific Parameters", "publication_ref": [ "b33", "b95", "b80", "b17", "b76", "b90", "b16", "b71", "b7", "b90", "b35", "b35", "b90", "b90" ], "table_ref": [], "text": "Comparison on LasHeR [36]: Similarly, we conduct evaluations on the LasHeR testing set for RGB-T tracker assessment, as shown in Tab. 2. Precision (PR) and success rate (SR) are reported following conventional standards [36,69,91]. Initial comparisons under domain-specific settings reveal the challenge of achieving an architecture-unified model across RGB-D and RGB-T domains, with ProTrack [69] and ViPT [91] being the only works consistently leading in both settings. Our Un-Track, following domain-specific finetuning, outperforms the leading ViPT by a significant margin and sets a new SOTA record. In the cross-domain joint learning with a single set of parameters, our uni-model achieves a +3.8% absolute gain over ViPT. Remarkably, our model with a single set of parameters already achieves very competitive performance compared to the thermal-specific ViPT version."
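For reference, the headline F-score used in the DepthTrack comparison above is the harmonic mean of precision and recall; the helper below (our own, not part of any benchmark toolkit) shows the relation, while the accumulation of Pr and Re over confidence thresholds follows the respective benchmark protocols and is not reproduced here.

```python
def f_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, as reported as the headline F-score."""
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)


# Example with the DepthTrack numbers quoted above for our uni-model:
print(round(f_score(0.610, 0.610), 3))  # -> 0.61
```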
}, { "figure_ref": [ "fig_5" ], "heading": "Comparison on VisEvent [55]:", "publication_ref": [ "b2", "b42", "b97" ], "table_ref": [], "text": "We also evaluate tracker performance with RGB-Event input. Event data, being inherently sparse compared to depth and thermal information, presents challenges in extending existing RGB-D or RGB-T fusion designs for effective integration, hence necessitating specific fusion designs [3,43,98]. In contrast, we adopt a unified cross-modal prompting method based on shrinkage fusion. Our approach, with gradual token exchanges between RGB and event modalities, effectively preserves crucial modality-specific clues to enhance feature modeling. Performance-wise, under the event-specific setting, our Un-Track outperforms all counterparts, as shown in Tab. 3 and in Fig. 5. In the single set of parameters setting, our uni-model achieves a +1.1% absolute gain in precision over the current SOTA. This underscores the effectiveness of our approach in handling the unique challenges posed by event data integration, which can be mainly contributed to our shared binding that learns the global RGB+X alignment. " }, { "figure_ref": [], "heading": "Generalization Across Datasets", "publication_ref": [], "table_ref": [], "text": "In this section, we assess the versatility by evaluating performance on datasets that differ from the training ones, aligning with the goal of achieving a universal model checkpoint applicable to diverse scenarios." }, { "figure_ref": [], "heading": "VOT-RGBD2022 [30]:", "publication_ref": [ "b34", "b34", "b34", "b90", "b7", "b71", "b90" ], "table_ref": [], "text": "We first perform inference on the VOT-RGBD2022 dataset. Notably, our uni-model achieves superior performance compared to the depth-specific ViPT with a +0.5% absolute gain in accuracy. RGBT234 [35]: We also test our model on the other thermal dataset RGBT234, which encompasses sequences with different distributions. Our uni-model surpasses the thermal-specific ViPT with a notable +0.7% absolute precision gain, as shown in Tab. 5. Moreover, we present a detailed per-attribute comparison with the fine-tuned ViPT in Fig. 6. We are particularly interested in sequences with motion blur-fast motion-camera moving, as well as sequences with heavy occlusion-scale variants-background clutter. The former motion-related challenges can typically benefit from event clues, renowned for asynchronous computing, while the latter geometry- Table 5. Overall performance on RGBT234 dataset [35]. Our uni-model sets new SOTA records without specific thermal finetuning.\nFigure 6. Per-attribute analysis on the thermal dataset RGBT234 [35]. Challenges related to motion and geometry are generally better addressed by event and depth cameras, respectively. Nevertheless, when inferring only with RGB-T data, our Un-Track surpasses both the SOTA thermal-specific method and the current leading uni-tracker. This success underscores our ability to learn emergent alignment across diverse modalities.\nrelated challenges can typically benefit from depth cameras. However, these event/depth clues are not directly available in the RGB-T setting. Nevertheless, our Un-Track, trained on all modalities, outperforms both thermal fine-tuned ViPT [91] and the current leading uni-tracker [8] with large margins. This underscores our capability in learning event and depth priors through shared binding, without the need for the presence of these modalities during inference. 
RGB-only: In practical scenarios, challenges arise when no auxiliary modal clues are available at all; a typical case is when the auxiliary sensor fails to work properly. We address this demanding case in our study by substituting the modal input with dummy values. Under such a challenging setting, as shown in Tab. 6, our uni-model consistently and significantly outperforms both our RGB baseline and the fine-tuned counterparts. Notably, such an improvement is achieved with a very limited increase in learnable parameters, i.e., +6.65M with +2.14 GFLOPs." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [ "b65", "b90" ], "table_ref": [], "text": "In this section, we perform all experiments on the DepthTrack testing set [66] under the single-parameter-set setting. Key Component Analysis: We begin by studying the effectiveness of the key components, including the shared embedding, the modal prompting, and the LoRA finetuning, as summarized in Tab. 7. We initially remove the shared embedding by directly feeding the mixed modalities into the learning pipeline. It can be seen that this equal treatment of all modalities harms network performance due to the heterogeneous representations across modal domains. Secondly, we replace our prompting block with a recent counterpart that computes fovea attention from the additional input [91]. Our designed gradual shrinkage fusion, allowing token-wise interaction, proves to be more effective. We also report performance when we remove the inner fine-tuning with LoRA. It can be seen that the performance deteriorates.
Low-Rank: Low-rank approximation plays a vital role in our model, influencing our shared embedding, modal fusion, and LoRA-finetuning. Hence, the choice of rank is crucial for both the efficiency and effectiveness of our approach; the impact of different rank choices for each component is explored in Tab. 8.
Shared Embedding: We further ablate the shared embedding itself (Tab. 9). When the explicit edge guidance is removed (w/o Explicit Edge), the results highlight a substantial performance drop, underscoring the pivotal role of explicit edge guidance (a natural and static embedding that binds all modalities together) in facilitating this implicit learning process. We also conduct experiments using only the explicit edge as the shared embedding, excluding any additional learning modules (w/o Implicit Learning). This approach, too, yields suboptimal performance, primarily due to the neglect of modality-specific clues within each domain.
Finally, we directly compute the low-rank approximation from the mixed modalities, bypassing the initial in-domain approximation and the subsequent fusion steps (w/o In-domain Approx.). We observe that the network struggles to learn a shared embedding in this direct low-rank approximation setup, primarily due to the intricately mixed representation caused by the domain gap between the different modalities." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "We present a successful case of a single-model and any-modality tracker for video object tracking. The proposed method achieves a shared embedding that binds all modalities together, overcoming their heterogeneous representations. This unification is facilitated by lightweight modal prompting and inner finetuning, inheriting the benefits of a large-scale pre-trained tracker without introducing a substantial computational burden. Exhaustive experiments showcase our improved tracking performance and robust generalization with any modal input. Acknowledgement: The authors thank the reviewers and ACs for their tremendous efforts and helpful comments. The event icon is credited to Zuowen Wang.
This research is financed in part by the Alexander von Humboldt Foundation, in part by NSFC (62376156, 62322113), and in part by the Ministry of Education and Science of Bulgaria (support for INSAIT, part of the Bulgarian National Roadmap for Research Infrastructure)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "The source code is publicly available at https://github.com/Zongwei97/UnTrack." } ]
In the realm of video object tracking, auxiliary modalities such as depth, thermal, or event data have emerged as valuable assets to complement the RGB trackers. In practice, most existing RGB trackers learn a single set of parameters to use them across datasets and applications. However, a similar single-model unification for multimodality tracking presents several challenges. These challenges stem from the inherent heterogeneity of inputs -each with modality-specific representations, the scarcity of multimodal datasets, and the absence of all the modalities at all times. In this work, we introduce Un-Track, a Unified Tracker of a single set of parameters for any modality. To handle any modality, our method learns their common latent space through low-rank factorization and reconstruction techniques. More importantly, we use only the RGB-X pairs to learn the common latent space. This unique shared representation seamlessly binds all modalities together, enabling effective unification and accommodating any missing modality, all within a single transformer-based architecture. Our Un-Track achieves +8.1 absolute F-score gain, on the DepthTrack dataset, by introducing only +2.14 (over 21.50) GFLOPs with +6.6M (over 93M) parameters, through a simple yet efficient prompting strategy. Extensive comparisons on five benchmark datasets with different modalities show that Un-Track surpasses both SOTA unified trackers and modality-specific counterparts, validating our effectiveness and practicality.
Single-Model and Any-Modality for Video Object Tracking
[ { "figure_caption": "Figure 1 .1Figure 1. Un-Track is a unified tracker with a single parameter set that seamlessly integrates any modality (of RGB-X).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Our proposed framework, termed Un-Track, is composed of a shared embedding, a modal prompting, and a LoRA-finetuned pretrained RGB tracker. The shared embedding learns a joint representation that unifies all modalities (Sec. 3.2). The modal prompting block enhances feature modeling with modal awareness at each scale (Sec. 3.3). To track the target object, we finetune a pretrained foundation model [72] using the LoRA technique (Sec. 3.4). Our model achieves a unified model applicable across different modalities under a single parameter set. During inference, Un-Track seamlessly integrates any image-paired data, thanks to the emergent alignment.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 Figure 3 .13Figure 3. Shared Embedding. We derive a joint representation through low-rank factorization and reconstruction. Such an implicit learning is additionally integrated with explicit edge awareness to enhance the embedding.", "figure_data": "", "figure_id": "fig_3", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Modal Prompting. For the visual feature I, we employ a score function to categorize tokens into negative, uncertain, and positive segments. Using a token exchange policy, we discard negative tokens, enhance uncertain ones with corresponding tokens from F, and retain positive ones. Then, we transform the feature fusion task into a token recovery problem, addressed by low-rank factorization. Similarly, we extract the most informative low-rank matrix from F to fuse and reconstruct the shared output.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. More precision/success comparisons on VisEvent dataset [55]. \"Uni\" stands for models with a single parameter set. \" E\" stands for the extension of RGB trackers with event fusion.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Overall performance on DepthTrack test set", "figure_data": "Depth-Specific ParametersUni-model with a Single Set of ParametersATCAISDDiMPDeTSPTProTrackViPTUn-TrackStarkAiATrackOSTrackUniNextSeqTrackViPTUn-Track[28][28][66][94][69][91](ours)[62][17][72][65][8][91](ours)F-score(↑)0.4760.4850.5320.5380.5780.5940.6120.3970.5150.5690.4220.5900.5610.610Re(↑)0.4550.4690.5060.5490.5730.5960.6100.4060.5260.5820.4320.6000.5620.610Pr(↑)0.5000.5030.5600.5270.5830.5920.6130.3880.5050.5570.4130.5800.5600.610", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Overall performance on the Lasher thermal testing set[36]. Our thermal-specific model sets a new SOTA record. Our uni-model variant competes strongly with the previous SOTA thermal-specific models and significantly surpasses its unimodel version.", "figure_data": "-Track(ours)", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Overall performance on VisEvent dataset[55]. Our event-specific model sets a new SOTA record. 
Our uni-variant, with the same parameter set as in Depth and Thermal, consistently achieves competitive performance across various modalities, leading to a significant margin over the uni-variant of the SOTA modality-specific model[91].", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Overall performance on VOT-RGBD2022[30]. Our uni-model, trained on a mix of all modalities, shows robust generalization and outperforms all depth-specific models and other uni-model counterparts.", "figure_data": "Depth-Specific ParametersUni-model with a Single Set of ParametersDRefineStark DDMTrackerDeTSBT DSPTProTrackViPTStarkAiATrackOSTrackSeqTrackUn-Track[29][29][30][66][30][94][69][91][62][17][72][8](ours)EAO(↑)0.5920.6470.6580.6570.7080.6510.6510.7210.4450.6410.6660.6790.718Accuracy(↑)0.7750.8030.7580.7600.8090.7980.8010.8150.7140.7690.8080.8020.820Robustness(↑)0.7600.7980.8510.8450.8640.8510.8020.8710.5980.8320.8140.8460.864Thermal-Specific ParametersUni-model with a Single Set of ParametersmfDiMPSGTDAFNetFANetMaCNetCMPPAPFNetProTrackViPTUn-TrackStarkAiATrackOSTrackSeqTrackUn-Track[81][34][18][96][77][52][59][69][91](ours)[62][17][72][8](ours)MPR(↑)0.6460.7200.7960.7870.7900.8230.8270.7950.8350.8370.6770.7110.7550.8060.842MSR(↑)0.4280.4720.5440.5530.5540.5750.5790.5990.6170.6180.4960.5080.5690.5990.625", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Overall performance on DepthTrack test set[66] with dummy depth input.", "figure_data": "GFLOPs21.5021.8023.64Params (M)92.0892.9698.73F-score(↑)0.5290.5420.558Re(↑)0.5220.5380.557Pr(↑)0.5360.5460.560w/o[91] asw/oShared EmbedModal PromptLoRA FinetuneF-score(↑)0.5990.5790.594Re(↑)0.6020.5750.598Pr(↑)0.5970.5840.596", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Key component analysis.", "figure_data": "", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Low Rank Approximation Our plain version is highlighted in gray . During the prompting fusion, a learnable score function is used to categorize tokens into three groups based on their confidence scores. Here, we explore different percentiles for the number of positive, which is the same as the number of negative tokens, leaving the rest as uncertain tokens. As shown in Tab. 8(d), the choice of percentile can influence overall performance. Lower percentiles result in poorer performance, as recovering uncertain tokens from very few neighboring tokens is challenging. Higher percentiles also lead to performance degradation since the focus is on token exchange rather than token fusion. The choice of 1/4 leads to the best balance between exchanger and fuser, leading to the best performance. Shared Embedding: Here, we perform ablation studies on our shared embedding, a foundational component of our uni-model. The quantitative results are presented in Tab. 9. We begin by exploring a scenario where our shared embedding is replaced with a variant lacking explicit edge guidance (w/o Explicit Edge). In this configuration, the network", "figure_data": "(a) Rank k (Sec. 3.2 )(b) Rank l (Sec. 3.3 )(c) LoRA (Sec. 3.4 )(d) Percentile (Sec. 3.3 )24848162481/81/41/3F-score(↑) 0.607 0.610 0.6020.596 0.610 0.6060.6010.6100.6000.604 0.610 0.595Re(↑)0.606 0.608 0.6010.593 0.608 0.6090.5990.6080.5980.606 0.608 0.593Pr(↑)0.608 0.611 0.6040.599 0.611 0.6040.6020.6110.6020.602 0.611 0.596proach. In Tab. 
8, we systematically explore the impact of low-rank choices within each component. Shared Embedding: Our objective is to identify an opti-mal low-rank latent space for merging different modalities effectively. Tab. 8(a) presents our ablation study, where we explore ranks of 2, 4, and 8. Lower ranks result in poorer performance due to sparse representations. Con-versely, higher ranks capture too many modality-specific details, complicating the search for a shared embedding. Modal Prompting: As shown in Tab. 8(b), similar trends are observed when investigating the low-rank choices for modal prompting with rank values of 4, 8, and 16, as low ranks struggle to capture essential information, while higher ranks introduce an overload of modality-specific details. LoRA-finetuning: We also vary the ranks between 2, 4, and 8 for the LoRA finetuning technique, as shown in Tab. 8(c). Lower ranks exhibited consistently poor per-formance, while higher ranks in this case tend to result in poorer performance, likely due to overfitting. Remarks: These experiments emphasize the importance of selecting the best low-rank representations. Nevertheless, our model shows great resilience to the choice of LoRA, showcasing consistent performance across different low-rank configurations. Notably, all our low-rank variants out-perform the current SOTA ViPT under the uni-setting, vali-dating our robustness and effectiveness. Modal Prompting: w/o Explicit Edge F-score(↑) 0.600 Re(↑) 0.602 Pr(↑) 0.598w/o Implicit Learning 0.604 0.609 0.599w/o In-domain Approx. 0.581 0.583 0.579", "figure_id": "tab_6", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Ablation on shared embedding learns the shared embedding solely without any edge prior.", "figure_data": "", "figure_id": "tab_7", "figure_label": "9", "figure_type": "table" } ]
Zongwei Wu; Jilai Zheng; Xiangxuan Ren; Florin-Alexandru Vasluianu; Chao Ma; Danda Pani Paudel; Luc Van Gool; Radu Timofte
[ { "authors": "Simon Arridge; Pascal Fernsel; Andreas Hauptmann", "journal": "Inverse Problems and Imaging", "ref_id": "b0", "title": "Joint reconstruction and low-rank decomposition for dynamic inverse problems", "year": "2022" }, { "authors": "Boyu Chen; Peixia Li; Lei Bai; Lei Qiao; Qiuhong Shen; Bo Li; Weihao Gan; Wei Wu; Wanli Ouyang", "journal": "Springer", "ref_id": "b1", "title": "Backbone is all your need: A simplified architecture for visual object tracking", "year": "2022" }, { "authors": "Qinyu Chen; Zuowen Wang; Shih-Chii Liu; Chang Gao", "journal": "", "ref_id": "b2", "title": "3et: Efficient event-based eye tracking using a change-based convlstm network", "year": "2023" }, { "authors": "Wanli Chen; Xinge Zhu; Ruoqi Sun; Junjun He; Ruiyu Li; Xiaoyong Shen; Bei Yu", "journal": "Springer", "ref_id": "b3", "title": "Tensor low-rank reconstruction for semantic segmentation", "year": "2020" }, { "authors": "Xin Chen; Bin Yan; Jiawen Zhu; Dong Wang; Xiaoyun Yang; Huchuan Lu", "journal": "", "ref_id": "b4", "title": "Transformer tracking", "year": "2021" }, { "authors": "Xin Chen; Bin Yan; Jiawen Zhu; Dong Wang; Xiaoyun Yang; Huchuan Lu", "journal": "", "ref_id": "b5", "title": "Transformer tracking", "year": "2021" }, { "authors": "Xin Chen; Bin Yan; Jiawen Zhu; Huchuan Lu; Xiang Ruan; Dong Wang", "journal": "TPAMI", "ref_id": "b6", "title": "High-performance transformer tracking", "year": "2022" }, { "authors": "Xin Chen; Houwen Peng; Dong Wang; Huchuan Lu; Han Hu", "journal": "", "ref_id": "b7", "title": "Seqtrack: Sequence to sequence learning for visual object tracking", "year": "2023" }, { "authors": "Zedu Chen; Bineng Zhong; Guorong Li; Shengping Zhang; Rongrong Ji", "journal": "", "ref_id": "b8", "title": "Siamese box adaptive network for visual tracking", "year": "2020" }, { "authors": "Anthony Cioppa; Silvio Giancola; Adrien Deliege; Le Kang; Xin Zhou; Zhiyu Cheng; Bernard Ghanem; Marc Van Droogenbroeck", "journal": "", "ref_id": "b9", "title": "Soccernet-tracking: Multiple object tracking dataset and benchmark in soccer videos", "year": "2022" }, { "authors": "Yutao Cui; Cheng Jiang; Limin Wang; Gangshan Wu", "journal": "", "ref_id": "b10", "title": "Mixformer: End-to-end tracking with iterative mixed attention", "year": "2022" }, { "authors": "Martin Danelljan; Goutam Bhat; Fahad Shahbaz Khan; Michael Felsberg", "journal": "", "ref_id": "b11", "title": "ATOM: Accurate tracking by overlap maximization", "year": "2019" }, { "authors": "Martin Danelljan; Luc Van Gool; Radu Timofte", "journal": "", "ref_id": "b12", "title": "Probabilistic regression for visual tracking", "year": "2020" }, { "authors": "Liting Heng Fan; Fan Lin; Peng Yang; Ge Chu; Sijia Deng; Hexin Yu; Yong Bai; Chunyuan Xu; Haibin Liao; Ling", "journal": "", "ref_id": "b13", "title": "LaSOT: A high-quality benchmark for large-scale single object tracking", "year": "2019" }, { "authors": "Yingkai Fu; Meng Li; Wenxi Liu; Yuanchen Wang; Jiqing Zhang; Baocai Yin; Xiaopeng Wei; Xin Yang", "journal": "TIP", "ref_id": "b14", "title": "Distractor-aware event-based tracking", "year": "2023" }, { "authors": "Shang Gao; Jinyu Yang; Zhe Li; Feng Zheng; Aleš Leonardis; Jingkuan Song", "journal": "Springer", "ref_id": "b15", "title": "Learning dual-fused modality-aware representations for rgbd tracking", "year": "2022" }, { "authors": "Shenyuan Gao; Chunluan Zhou; Chao Ma; Xinggang Wang; Junsong Yuan", "journal": "", "ref_id": "b16", "title": "Aiatrack: Attention in attention for transformer visual tracking", "year": 
"2022" }, { "authors": "Yuan Gao; Chenglong Li; Yabin Zhu; Jin Tang; Tao He; Futian Wang", "journal": "ICCVW", "ref_id": "b17", "title": "Deep adaptive fusion network for high performance RGBT tracking", "year": "2019" }, { "authors": "Rohit Girdhar; Alaaeldin El-Nouby; Zhuang Liu; Mannat Singh; Kalyan Vasudev Alwala; Armand Joulin; Ishan Misra", "journal": "", "ref_id": "b18", "title": "Imagebind: One embedding space to bind them all", "year": "2023" }, { "authors": "Botao He; Haojia Li; Siyuan Wu; Dong Wang; Zhiwei Zhang; Qianli Dong; Chao Xu; Fei Gao", "journal": "", "ref_id": "b19", "title": "Fast-dynamicvision: Detection and tracking dynamic objects with event and depth sensing", "year": "2021" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "ICLR", "ref_id": "b20", "title": "Lora: Low-rank adaptation of large language models", "year": "2022" }, { "authors": "Lianghua Huang; Xin Zhao; Kaiqi Huang", "journal": "TPAMI", "ref_id": "b21", "title": "Got-10k: A large high-diversity benchmark for generic object tracking in the wild", "year": "2019" }, { "authors": "Sajid Javed; Martin Danelljan; Fahad Shahbaz Khan; Muhammad Haris Khan; Michael Felsberg; Jiri Matas", "journal": "TPAMI", "ref_id": "b22", "title": "Visual object tracking with discriminative filters and siamese networks: a survey and outlook", "year": "2022" }, { "authors": "I-Hong Jhuo; Dong Liu; Shih-Fu Lee; Chang", "journal": "", "ref_id": "b23", "title": "Robust visual domain adaptation with low-rank reconstruction", "year": "2012" }, { "authors": "Menglin Jia; Luming Tang; Bor-Chun Chen; Claire Cardie; Serge Belongie; Bharath Hariharan; Ser-Nam Lim", "journal": "Springer", "ref_id": "b24", "title": "Visual prompt tuning", "year": "2022" }, { "authors": "Ben Kang; Xin Chen; Dong Wang; Houwen Peng; Huchuan Lu", "journal": "", "ref_id": "b25", "title": "Exploring lightweight hierarchical vision transformers for efficient visual tracking", "year": "2023" }, { "authors": "Muhammad Uzair Khattak; Hanoona Rasheed; Muhammad Maaz; Salman Khan; Fahad Shahbaz Khan", "journal": "", "ref_id": "b26", "title": "Maple: Multi-modal prompt learning", "year": "2023" }, { "authors": "Matej Kristan; Aleš Leonardis; Jiří Matas; Michael Felsberg; Roman Pflugfelder; Joni-Kristian Kämäräinen; Martin Danelljan; Čehovin Luka; Alan Zajc; Ondrej Lukežič; Drbohlav", "journal": "Springer", "ref_id": "b27", "title": "The eighth visual object tracking vot2020 challenge results", "year": "2020" }, { "authors": "Matej Kristan; Jiří Matas; Aleš Leonardis; Michael Felsberg; Roman Pflugfelder; Joni-Kristian Kämäräinen; Jin Hyung; Martin Chang; Luka Danelljan; Alan Cehovin; Lukežič", "journal": "", "ref_id": "b28", "title": "The ninth visual object tracking vot2021 challenge results", "year": "2021" }, { "authors": "Matej Kristan; Aleš Leonardis; Jiří Matas; Michael Felsberg; Roman Pflugfelder; Joni-Kristian Kämäräinen; Jin Hyung; Martin Chang; Danelljan; Čehovin Luka; Alan Zajc; Lukežič", "journal": "Springer", "ref_id": "b29", "title": "The tenth visual object tracking vot2022 challenge results", "year": "2023" }, { "authors": "Yi-Lun Lee; Yi-Hsuan Tsai; Wei-Chen Chiu; Chen-Yu Lee", "journal": "", "ref_id": "b30", "title": "Multimodal prompting with missing modalities for visual recognition", "year": "2023" }, { "authors": "Bo Li; Junjie Yan; Wei Wu; Zheng Zhu; Xiaolin Hu", "journal": "", "ref_id": "b31", "title": "High performance visual tracking with siamese region 
proposal network", "year": "2018" }, { "authors": "Bo Li; Wei Wu; Qiang Wang; Fangyi Zhang; Junliang Xing; Junjie Yan", "journal": "", "ref_id": "b32", "title": "SiamRPN++: Evolution of siamese visual tracking with very deep networks", "year": "2019" }, { "authors": "Chenglong Li; Nan Zhao; Yijuan Lu; Chengli Zhu; Jin Tang", "journal": "ACMMM", "ref_id": "b33", "title": "Weighted sparse representation regularized graph learning for RGB-T object tracking", "year": "2017" }, { "authors": "Chenglong Li; Xinyan Liang; Yijuan Lu; Nan Zhao; Jin Tang", "journal": "Pattern Recognition", "ref_id": "b34", "title": "RGB-T object tracking: Benchmark and baseline", "year": "2019" }, { "authors": "Chenglong Li; Wanlin Xue; Yaqing Jia; Zhichen Qu; Bin Luo; Jin Tang; Dengdi Sun", "journal": "TIP", "ref_id": "b35", "title": "Lasher: A large-scale highdiversity benchmark for RGBT tracking", "year": "2021" }, { "authors": "Qiao Liu; Xin Li; Zhenyu He; Nana Fan; Di Yuan; Hongpeng Wang", "journal": "TMM", "ref_id": "b36", "title": "Learning deep multi-level similarity for thermal infrared object tracking", "year": "2020" }, { "authors": "Long Cheng; Andong Li; Ai Lu; Zhengzheng Hua Zheng; Jin Tu; Tang", "journal": "ICCVW", "ref_id": "b37", "title": "Multi-adapter rgbt tracking", "year": "2019" }, { "authors": "Jiasen Lu; Christopher Clark; Rowan Zellers; Roozbeh Mottaghi; Aniruddha Kembhavi", "journal": "ICLR", "ref_id": "b38", "title": "Unified-io: A unified model for vision, language, and multi-modal tasks", "year": "2023" }, { "authors": "Alan Lukezic; Ugur Kart; Jani Kapyla; Ahmed Durmush; Joni-Kristian Kamarainen; Jiri Matas; Matej Kristan", "journal": "", "ref_id": "b39", "title": "Cdtb: A color and depth visual object tracking dataset and benchmark", "year": "2019" }, { "authors": "Mengmeng Ma; Jian Ren; Long Zhao; Sergey Tulyakov; Cathy Wu; Xi Peng", "journal": "", "ref_id": "b40", "title": "Smil: Multimodal learning with severely missing modality", "year": "2021" }, { "authors": "Matthias Muller; Adel Bibi; Silvio Giancola; Salman Alsubaihi; Bernard Ghanem", "journal": "", "ref_id": "b41", "title": "TrackingNet: A large-scale dataset and benchmark for object tracking in the wild", "year": "2018" }, { "authors": "Yansong Peng; Yueyi Zhang; Zhiwei Xiong; Xiaoyan Sun; Feng Wu", "journal": "", "ref_id": "b42", "title": "Get: Group event transformer for event-based vision", "year": "2023" }, { "authors": "Yansheng Qiu; Ziyuan Zhao; Hongdou Yao; Delin Chen; Zheng Wang", "journal": "ACM MM", "ref_id": "b43", "title": "Modal-aware visual prompting for incomplete multi-modal brain tumor segmentation", "year": "2023" }, { "authors": "Esteban Real; Jonathon Shlens; Stefano Mazzocchi; Xin Pan; Vincent Vanhoucke", "journal": "", "ref_id": "b44", "title": "Youtube-boundingboxes: A large high-precision human-annotated data set for object detection in video", "year": "2017" }, { "authors": "Yibing Song; Chao Ma; Xiaohe Wu; Lijun Gong; Linchao Bao; Wangmeng Zuo; Chunhua Shen; Rynson Wh Lau; Ming-Hsuan Yang", "journal": "", "ref_id": "b45", "title": "Vital: Visual tracking via adversarial learning", "year": "2018" }, { "authors": "Daniel Stadler; Jurgen Beyerer", "journal": "", "ref_id": "b46", "title": "Improving multiple pedestrian tracking by track management and occlusion handling", "year": "2021" }, { "authors": "Chuanming Tang; Xiao Wang; Ju Huang; Bo Jiang; Lin Zhu; Jianlin Zhang; Yaowei Wang; Yonghong Tian", "journal": "", "ref_id": "b47", "title": "Revisiting color-event based tracking: A unified network, 
dataset, and metric", "year": "2022" }, { "authors": "Chuanming Tang; Xiao Wang; Ju Huang; Bo Jiang; Lin Zhu; Jianlin Zhang; Yaowei Wang; Yonghong Tian", "journal": "", "ref_id": "b48", "title": "Revisiting color-event based tracking: A unified network, dataset, and metric", "year": "2022" }, { "authors": "Paul Voigtlaender; Jonathon Luiten; H S Philip; Bastian Torr; Leibe", "journal": "", "ref_id": "b49", "title": "Siam R-CNN: Visual tracking by redetection", "year": "2020" }, { "authors": "Zhexiong Wan; Yuxin Mao; Jing Zhang; Yuchao Dai", "journal": "", "ref_id": "b50", "title": "Rpeflow: Multimodal fusion of rgb-pointcloud-event for joint optical flow and scene flow estimation", "year": "2023" }, { "authors": "Chaoqun Wang; Chunyan Xu; Zhen Cui; Ling Zhou; Tong Zhang; Xiaoya Zhang; Jian Yang", "journal": "", "ref_id": "b51", "title": "Cross-modal patternpropagation for RGB-T tracking", "year": "2020" }, { "authors": "Hu Wang; Yuanhong Chen; Congbo Ma; Jodie Avery; Louise Hull; Gustavo Carneiro", "journal": "", "ref_id": "b52", "title": "Multi-modal learning with missing modality via shared-specific feature modelling", "year": "2023" }, { "authors": "Ning Wang; Wengang Zhou; Jie Wang; Houqiang Li", "journal": "", "ref_id": "b53", "title": "Transformer meets tracker: Exploiting temporal context for robust visual tracking", "year": "2021" }, { "authors": "Xiao Wang; Jianing Li; Lin Zhu; Zhipeng Zhang; Zhe Chen; Xin Li; Yaowei Wang; Yonghong Tian; Feng Wu", "journal": "", "ref_id": "b54", "title": "Visevent: Reliable object tracking via collaboration of frame and event flows", "year": "2021" }, { "authors": "Xiao Wang; Xiujun Shu; Shilliang Zhang; Bo Jiang; Yaowei Wang; Yonghong Tian; Feng Wu", "journal": "TMM", "ref_id": "b55", "title": "Mfgnet: Dynamic modality-aware filter generation for rgb-t tracking", "year": "2022" }, { "authors": "Zuowen Wang; Yuhuang Hu; Shih-Chii Liu", "journal": "", "ref_id": "b56", "title": "Exploiting spatial sparsity for event cameras with visual transformers", "year": "2022" }, { "authors": "David Wisth; Marco Camurri; Sandipan Das; Maurice Fallon", "journal": "RA-L", "ref_id": "b57", "title": "Unified multi-modal landmark tracking for tightly coupled lidar-visual-inertial odometry", "year": "2021" }, { "authors": "Yun Xiao; Mengmeng Yang; Chenglong Li; Lei Liu; Jin Tang", "journal": "", "ref_id": "b58", "title": "Attribute-based progressive fusion network for RGBT tracking", "year": "2022" }, { "authors": "Yun Xiao; Mengmeng Yang; Chenglong Li; Lei Liu; Jin Tang", "journal": "", "ref_id": "b59", "title": "Attribute-based progressive fusion network for rgbt tracking", "year": "2022" }, { "authors": "Yinda Xu; Zeyu Wang; Zuoxin Li; Ye Yuan; Gang Yu", "journal": "", "ref_id": "b60", "title": "SiamFC++: Towards robust and accurate visual tracking with target estimation guidelines", "year": "2020" }, { "authors": "Bin Yan; Houwen Peng; Jianlong Fu; Dong Wang; Huchuan Lu", "journal": "", "ref_id": "b61", "title": "Learning spatio-temporal transformer for visual tracking", "year": "2021" }, { "authors": "Bin Yan; Houwen Peng; Jianlong Fu; Dong Wang; Huchuan Lu", "journal": "", "ref_id": "b62", "title": "Learning spatio-temporal transformer for visual tracking", "year": "2021" }, { "authors": "Bin Yan; Houwen Peng; Kan Wu; Dong Wang; Jianlong Fu; Huchuan Lu", "journal": "", "ref_id": "b63", "title": "Lighttrack: Finding lightweight neural networks for object tracking via one-shot architecture search", "year": "2021" }, { "authors": "Bin Yan; Yi Jiang; Jiannan Wu; 
Dong Wang; Ping Luo; Zehuan Yuan; Huchuan Lu", "journal": "", "ref_id": "b64", "title": "Universal instance perception as object discovery and retrieval", "year": "2023" }, { "authors": "Song Yan; Jinyu Yang; Jani Käpylä; Feng Zheng; Aleš Leonardis; Joni-Kristian Kämäräinen", "journal": "", "ref_id": "b65", "title": "Depthtrack: Unveiling the power of RGBD tracking", "year": "2021" }, { "authors": "Song Yan; Jinyu Yang; Ales Leonardis; Joni-Kristian Kamarainen", "journal": "", "ref_id": "b66", "title": "Depth-only object tracking", "year": "2021" }, { "authors": "Jinyu Yang; Zhe Li; Song Yan; Feng Zheng; Aleš Leonardis; Joni-Kristian Kämäräinen; Ling Shao", "journal": "", "ref_id": "b67", "title": "Rgbd object tracking: An in-depth review", "year": "2022" }, { "authors": "Jinyu Yang; Zhe Li; Feng Zheng; Ales Leonardis; Jingkuan Song", "journal": "ACMMM", "ref_id": "b68", "title": "Prompting for multi-modal tracking", "year": "2007" }, { "authors": "Jinyu Yang; Shang Gao; Zhe Li; Feng Zheng; Aleš Leonardis", "journal": "", "ref_id": "b69", "title": "Resource-efficient rgbd aerial tracking", "year": "2023" }, { "authors": "Rui Yao; Guosheng Lin; Shixiong Xia; Jiaqi Zhao; Yong Zhou", "journal": "ACM TIST", "ref_id": "b70", "title": "Video object segmentation and tracking: A survey", "year": "2020" }, { "authors": "Botao Ye; Hong Chang; Bingpeng Ma; Shiguang Shan; Xilin Chen", "journal": "Springer", "ref_id": "b71", "title": "Joint feature learning and relation modeling for tracking: A one-stream framework", "year": "2022" }, { "authors": "Jie Yin; Ang Li; Tao Li; Wenxian Yu; Danping Zou", "journal": "RA-L", "ref_id": "b72", "title": "M2dgr: A multi-sensor and multi-scenario slam dataset for ground robots", "year": "2021" }, { "authors": "Yuechen Yu; Yilei Xiong; Weilin Huang; Matthew R Scott", "journal": "", "ref_id": "b73", "title": "Deformable siamese attention networks for visual object tracking", "year": "2020" }, { "authors": "Jiandian Zeng; Tianyi Liu; Jiantao Zhou", "journal": "", "ref_id": "b74", "title": "Tag-assisted multimodal sentiment analysis under uncertain missing modalities", "year": "2022" }, { "authors": "Chunhui Zhang; Xin Sun; Yiqian Yang; Li Liu; Qiong Liu; Xi Zhou; Yanfeng Wang", "journal": "ACM MM", "ref_id": "b75", "title": "All in one: Exploring unified vision-language tracking with multi-modal alignment", "year": "2023" }, { "authors": "Hui Zhang; Lei Zhang; Li Zhuo; Jing Zhang", "journal": "Sensors", "ref_id": "b76", "title": "Object tracking in RGB-T videos using modal-aware attention network and competitive learning", "year": "2020" }, { "authors": "Jiqing Zhang; Xin Yang; Yingkai Fu; Xiaopeng Wei; Baocai Yin; Bo Dong", "journal": "", "ref_id": "b77", "title": "Object tracking by jointly exploiting frame and event domain", "year": "2021" }, { "authors": "Jiqing Zhang; Bo Dong; Haiwei Zhang; Jianchuan Ding; Felix Heide; Baocai Yin; Xin Yang", "journal": "", "ref_id": "b78", "title": "Spiking transformers for event-based single object tracking", "year": "2022" }, { "authors": "Jiaming Zhang; Ruiping Liu; Hao Shi; Kailun Yang; Simon Reiß; Kunyu Peng; Haodong Fu; Kaiwei Wang; Rainer Stiefelhagen", "journal": "", "ref_id": "b79", "title": "Delivering arbitrary-modal semantic segmentation", "year": "2023" }, { "authors": "Lichao Zhang; Martin Danelljan; Abel Gonzalez-Garcia; Joost Van De Weijer; Fahad Shahbaz Khan", "journal": "ICCVW", "ref_id": "b80", "title": "Multi-modal fusion for end-to-end RGB-T tracking", "year": "2019" }, { "authors": "Pengyu Zhang; Dong Wang; 
Huchuan Lu", "journal": "", "ref_id": "b81", "title": "Multi-modal visual tracking: Review and experimental comparison", "year": "2020" }, { "authors": "Pengyu Zhang; Jie Zhao; Dong Wang; Huchuan Lu; Xiang Ruan", "journal": "", "ref_id": "b82", "title": "Visible-thermal uav tracking: A large-scale benchmark and new baseline", "year": "2022" }, { "authors": "Wenwei Zhang; Hui Zhou; Shuyang Sun; Zhe Wang; Jianping Shi; Chen Change Loy", "journal": "", "ref_id": "b83", "title": "Robust multi-modality multi-object tracking", "year": "2019" }, { "authors": "Zhipeng Zhang; Houwen Peng", "journal": "", "ref_id": "b84", "title": "Deeper and wider siamese networks for real-time visual tracking", "year": "2019" }, { "authors": "Haojie Zhao; Dong Wang; Huchuan Lu", "journal": "", "ref_id": "b85", "title": "Representation learning for visual object tracking by masked appearance transfer", "year": "2023" }, { "authors": "Jinjian Zhao; Xiaohan Zhang; Pengyu Zhang", "journal": "", "ref_id": "b86", "title": "A unified approach for tracking uavs in infrared", "year": "2021" }, { "authors": "Shaochuan Zhao; Tianyang Xu; Xiao-Jun Wu; Xue-Feng Zhu", "journal": "PR", "ref_id": "b87", "title": "Adaptive feature fusion for visual object tracking", "year": "2021" }, { "authors": "Aihua Zheng; Zi Wang; Zihan Chen; Chenglong Li; Jin Tang", "journal": "", "ref_id": "b88", "title": "Robust multi-modality person re-identification", "year": "2021" }, { "authors": "Jiawen Zhu; Zhenyu Chen; Zeqi Hao; Shijie Chang; Lu Zhang; Dong Wang; Huchuan Lu; Bin Luo; Jun-Yan He; Jin-Peng Lan", "journal": "", "ref_id": "b89", "title": "Tracking anything in high quality", "year": "2023" }, { "authors": "Jiawen Zhu; Simiao Lai; Xin Chen; Dong Wang; Huchuan Lu", "journal": "", "ref_id": "b90", "title": "Visual prompt multi-modal tracking", "year": "2023" }, { "authors": "Jiawen Zhu; Huayi Tang; Zhi-Qi Cheng; Jun-Yan He; Bin Luo; Shihao Qiu; Shengming Li; Huchuan Lu", "journal": "", "ref_id": "b91", "title": "Dcpt: Darkness clue-prompted tracking in nighttime uavs", "year": "2023" }, { "authors": "Xue-Feng Zhu; Xiao-Jun Wu; Tianyang Xu; Zhen-Hua Feng; Josef Kittler", "journal": "TMM", "ref_id": "b92", "title": "Robust visual object tracking via adaptive attribute-aware discriminative correlation filters", "year": "2021" }, { "authors": "Xue-Feng Zhu; Tianyang Xu; Zhangyong Tang; Zucheng Wu; Haodong Liu; Xiao Yang; Xiao-Jun Wu; Josef Kittler", "journal": "AAAI", "ref_id": "b93", "title": "RGBD1K: A large-scale dataset and benchmark for RGB-D object tracking", "year": "2023" }, { "authors": "Xue-Feng Zhu; Tianyang Xu; Zhangyong Tang; Zucheng Wu; Haodong Liu; Xiao Yang; Xiao-Jun Wu; Josef Kittler", "journal": "", "ref_id": "b94", "title": "Rgbd1k: A large-scale dataset and benchmark for rgb-d object tracking", "year": "2023" }, { "authors": "Yabin Zhu; Chenglong Li; Jin Tang; Bin Luo", "journal": "IEEE Transactions on Intelligent Vehicles", "ref_id": "b95", "title": "Qualityaware feature aggregation network for robust RGBT tracking", "year": "2020" }, { "authors": "Zhiyu Zhu; Junhui Hou; Xianqiang Lyu", "journal": "NeurIPS", "ref_id": "b96", "title": "Learning graphembedded key-event back-tracing for object tracking in event clouds", "year": "2022" }, { "authors": "Zhiyu Zhu; Junhui Hou; Dapeng Oliver Wu", "journal": "", "ref_id": "b97", "title": "Crossmodal orthogonal high-rank augmentation for rgb-event transformer-trackers", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 342.03, 670.23, 203.08, 9.87 ], "formula_id": "formula_0", "formula_text": "D k = σ d (D), T k = σ t (T ), E k = σ e (E),(1)" }, { "formula_coordinates": [ 4, 100.27, 557.74, 186.1, 11.05 ], "formula_id": "formula_1", "formula_text": "M k = ϕ R 1 ([D k , T k , E k ]) + ϕ R 2 (G k ),(2)" }, { "formula_coordinates": [ 4, 132.32, 631.47, 154.04, 9.87 ], "formula_id": "formula_2", "formula_text": "F = Φ R (M k ) + G,(3)" }, { "formula_coordinates": [ 4, 378.41, 581.33, 166.71, 11.21 ], "formula_id": "formula_3", "formula_text": "I l 1 = σ c (m n • F + m p • I).(4)" }, { "formula_coordinates": [ 4, 378.48, 686.51, 97.03, 11.21 ], "formula_id": "formula_4", "formula_text": "I l 2 = σ n (m u • F + m u • I)." }, { "formula_coordinates": [ 5, 134.61, 233.9, 151.75, 11.21 ], "formula_id": "formula_5", "formula_text": "I l = ϕ P ([I l 1 , I l 2 ]),(6)" }, { "formula_coordinates": [ 5, 134.75, 327.72, 151.61, 9.88 ], "formula_id": "formula_6", "formula_text": "O = Φ P (I l + F l ),(7)" }, { "formula_coordinates": [ 5, 136.43, 584.55, 149.94, 10.09 ], "formula_id": "formula_7", "formula_text": "h = W 0 x x x + BAx x x.(8)" }, { "formula_coordinates": [ 5, 314.84, 264.69, 191.19, 22.72 ], "formula_id": "formula_8", "formula_text": "RGB i +X i RGB i +X i Uni-model i RGB i +X i RGB i +X i" }, { "formula_coordinates": [ 6, 346.22, 91.54, 15.3, 16.47 ], "formula_id": "formula_9", "formula_text": "Stark [62]" } ]
10.1145/1015330.1015432
2023-11-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b24", "b28", "b11", "b17", "b16", "b9", "b10", "b22", "b18", "b25", "b12", "b9", "b1", "b26" ], "table_ref": [], "text": "Sequential gradient-free (Bayesian) optimization has been the go-to tool for hyperparameter optimization (hereafter hyperopt) since it was introduced to machine learning by Bergstra et al. (2011) and Snoek et al. (2012). The basic algorithmic skeleton is relatively simple: we learn a probabilistic surrogate on the finished trials then we optimize an acquisition function to propose a new arm (hyperparameter value combination). That said, there are numerous bricks and heuristics that afford a large space of algorithmic variants, and choosing the right hyperopt technique and library is challenging for a practicing data scientist.\nHyperparameter optimization is a crucial step in designing well-performing models (Zhang et al., 2021). It is also very expensive since every search step requires the training and scoring a learning algorithm. Comparing hyperopt engines is thus doubly expensive, since we need to run many hyperopt experiments to establish statistical significance between the performance of the engines. Not surprisingly, only a handful of papers attempt such comparison (Falkner et al., 2018;Li et al., 2018;Klein and Hutter, 2019;Cowen-Rivers et al., 2020;Eggensperger et al., 2021), and many times these studies are not independent: they introduce a new technique which is also part of the comparison. It is also common to use tiny UCI data whose relevance to real-world data science is questionable.\nIntegration libraries such as ScikitLearn (Pedregosa et al., 2011), Ray Tune (Liaw et al., 2018), and OpenML (Vanschoren et al., 2013;Feurer et al., 2019) provide unified APIs to machine learning models, data, and hyperopt engines, respectively. Ray Tune is especially useful as a model selection and hyperparameter optimization library that affords a unified interface to many popular hyperparameter optimization algorithms we call engines in this paper. Section 3 is dedicated to a brief enumeration of all the engines participating in this comparison.\nThanks to these awesome tools, hyperopt comparison can be done relatively painlessly. In this paper, we put ourselves into the shoes of a practicing data scientist who is not an expert of hyperopt, and would just like to choose the best engine with default parameters from a library that provides a unified API to many engines. We aim at answering basic questions: by running a hyperopt engine versus random search, i) how much do I gain on the score and ii) how much time do I save? This is an experimental integration study, which means that we do not introduce any new hyperopt technique. That said, we believe that our results point to the right directions for hyperopt researchers to improve the techniques.\nOur basic methodology is to first run grid search on a coarse predefined search grid, essentially pre-computing all the scores that the various engines with various seeds ask for in their attempt to find the optimum. This means that we decouple the expensive traintest-score step from the sequential optimization, affording us space to achieve statistical significance. This choice also means that we restrict the study to a grid search (with limited budget), which favors some of the engines. 
We have good arguments (Section 2.1) to support this decision, not only because it makes the comparison possible, but also because it is a good practice.\nAnother constraint that comes with the requirement of running many hyperopt experiments is that we need to limit the size of the training data sets. As with the grid constraint, we argue that this does not make the study \"academic\": most of the data in the real world are small. In our experiments (Section 4) we use ten-fold cross validation on 5000 data points, uniformly across data sets and models. This is small enough to run a meaningful set of experiments and big enough to obtain meaningful results. On the other hand, to eliminate test variance, we select data sets that afford us huge test sets, thus precise measurements of the expected scores.\nOne of the problems we need to solve for establishing statistically significant differences is aggregation: we need to be able to average results across metrics, data sets, and models. We introduce two metrics to solve this problem. Rank-based metrics (Section 2.3.1) answer the question: what is the probability that an engine performs better than random search? We design a statistics based on the discounted cumulative gain metrics to answer this question. Score-based metrics (Section 2.3.2) answer the question: how much do we improve the score of random search by using a hyperopt engine? The issue here is the different scale of scores across metrics, data sets, and models, which we solve by sandwiching the scores between those of random search and grid search. Averaging rank-based metrics is more proper, but score-based metrics measure the quantity that we want to optimize in practice.\nOur study is also limited to a purely sequential protocol. While we tend to agree that distributing hyperopt is the best way to accelerate it, adding distributedness to the comparison study raises several methodological questions which are hard to solve. We also do not test advanced features such as pruning and dynamically constructing hyperparameter spaces. These are useful when dealing with complex hyperopt problems, but these are hard to compare systematically, and they fall out of the scope of this paper. We would add tough that even when we use these sophisticated heuristics, search in a fixed space lies at the heart of hyperopt, so knowing where to turn when such a step is needed leads to an overall gain.\nHere is the summary of our findings.\n• Most engines are significantly better than random search, with the best ones accelerating the search two to three times.\n• Out of the eleven engines tested, three stand out: Huawei's HEBO (Cowen-Rivers et al., 2020) that won the 2020 NeurIPS BBO Challenge; Meta's AX (Bakshy et al., 2018); and Microsoft's BlendSearch (Wang et al., 2021).\n• Some engines seem to specialize in hyperopting certain learning algorithms. This makes it tricky to use hyperopt in comparison studies, since the choice of the hyperopt technique may favor some of the models in the comparison." }, { "figure_ref": [], "heading": "The experimental methodology", "publication_ref": [], "table_ref": [], "text": "Most machine learning prediction models f (x; θ) come with a few hyperparameters θ = (θ 1 , . . . , θ D ), typically D ∈ {2, . . . , 10}. Data scientists tune these hyperparameters to a given data set D = (x i , y i ) n i=1 . Each trial ℓ will test a vector of hyperparameter values θ ℓ that we will call arms (following the multi-armed bandit terminology). 
Each arm θ^ℓ will be pulled K = 10 times on K pairs of training/validation sets (D^k_trn, D^k_val), k = 1, . . . , K, drawn randomly from the data set D (K-fold randomized cross-validation), resulting in models f^ℓ_k = A(D^k_trn, θ^ℓ). We assume that each hyperparameter θ_j is discretized into a finite number N_j of values θ_j ∈ {v^1_j, . . . , v^{N_j}_j}, for all j = 1, . . . , D. In this way, arms θ^ℓ are represented by the integer index vector (grid coordinate vector) i^ℓ = (i^ℓ_1, . . . , i^ℓ_D) ∈ G, where θ^ℓ_j = v^{i^ℓ_j}_j for all j = 1, . . . , D, and G is the integer grid G = ∏_{j=1}^{D} {1, . . . , N_j} with grid size N = ∏_{j=1}^{D} N_j." }, { "figure_ref": [], "heading": "Why use a finite grid?", "publication_ref": [ "b3" ], "table_ref": [], "text": "The operational reason for using a finite grid in this study is that it lets us pre-compute the validation scores\nT = { r̄^i_k = R(A(D^k_trn, i), D^k_val) : i ∈ G, k ∈ {1, . . . , K} }\nfor the full grid, letting us rapidly read out the result when an engine pulls an arm. At the same time, we argue that pre-defining the search grid is also a good practice of the experienced data scientist, for the following reasons.\n1. The grid is always pre-defined by the numerical resolution, and an even smaller-resolution grid needs to be used when the acquisition function is optimized on the surrogate model (there are attempts (Bardenet and Kégl, 2010) to improve the inherent grid search of that step). So the decision is not on whether we should use grid or continuous search, but on the resolution of the grid.\n2. Since the task is noisy optimization (r̄^ℓ_k ≠ r^ℓ_k), there is an optimal finite grid resolution: a too coarse grid may lead to missing the optimum, but a too fine grid combined with a perfect optimizer may lead to overfitting (this is the same reason why SGD, a suboptimal optimization technique, is the state of the art for training neural nets). In fact, in one of our experiments it happened that, even with a coarse grid, the full grid search led to a worse test score than a random search on a small subset of the grid.\n3. Data scientists usually know enough to design the grid. They can use priors about length scale (smoothness of r vs. θ) to adapt the grid to the hyperparameter and data size, and inform the engine about the resolution of the search and the possibly nonlinear scale of the hyperparameters. Arguably, on a training set of a thousand points, there is no reason to test a random forest with both 100 and 101 trees.\n4. Discretization may make algorithms simpler and more robust. Gaussian processes (GP) and bandits are easier to design when the input space is discrete. In the case of a GP surrogate, its hyperopt is more robust if the length scale is given by the grid resolution and only the noise parameter needs to be tuned.\nThe only situation when discretization is restrictive is when a hyperparameter has a large range and the objective function is rough: it has a deep and thin well around the optimum. In our experience, such hyperparameters are rare. Dealing with this rare case will require several refining meta-iterations mimicking a line search.\nAs a summary, we acknowledge that the coarse grids used in our experiments may be suboptimal and may also disfavor some of the engines; nevertheless, our setup is robust and informative to the real-life hyperopt practitioner."
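To make the discretized grid and the pre-computed score table concrete, here is a minimal Python sketch (not code from the paper; the helper names and the placeholder scorer are our own assumptions). It uses the RF2 grid listed in Appendix A.1; once the table is filled, an engine pulling an arm reduces to a dictionary lookup instead of a training run.

```python
import itertools
import random

# Discretized hyperparameter grid: each hyperparameter j has N_j candidate values
# (values taken from the RF2 grid in Appendix A.1).
grid_values = {
    "max_leaf_nodes": [2, 5, 10, 20, 50, 100, 200, 500, 1000],  # N_1 = 9
    "n_estimators":   [10, 20, 50, 100, 200, 500, 1000],        # N_2 = 7
}

# Integer grid G; an arm is a coordinate vector i = (i_1, ..., i_D).
coords = list(itertools.product(*[range(len(v)) for v in grid_values.values()]))
N = len(coords)  # grid size N = N_1 * N_2 = 63

def arm_to_params(i):
    """Map a grid coordinate vector to actual hyperparameter values."""
    return {name: vals[idx] for (name, vals), idx in zip(grid_values.items(), i)}

def train_and_score(params, fold):
    """Placeholder for training A(D_trn^k, i) and scoring it on D_val^k."""
    random.seed(hash((tuple(sorted(params.items())), fold)))
    return random.random()

# Pre-compute the validation score of every arm on every fold once.
K = 10
score_table = {(i, k): train_and_score(arm_to_params(i), k)
               for i in coords for k in range(K)}

def pull(i, k):
    """During the hyperopt loop, pulling an arm is a cheap table lookup."""
    return score_table[(i, k)]

print(N, pull(coords[0], 0))
```

In the actual study the placeholder scorer would be replaced by the expensive train/validate step, run once per (arm, fold) pair before any engine is launched.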
}, { "figure_ref": [], "heading": "The sequential hyperopt loop", "publication_ref": [], "table_ref": [], "text": "The experimental design algorithm starts with an empty history H 0 = {} and iterates the following three steps for ℓ = 1, . . . , L:\n1. given the history H ℓ-1 , design an arm\nθ ℓ = v i ℓ 1 1 , . . . , v i ℓ D D represented by i ℓ = i ℓ 1 , . . . , i ℓ D ;\n2. call the training and scoring algorithms to obtain r ℓ ; and\n3. add the pair h ℓ = (i ℓ , r ℓ ) to the history\nH ℓ = H ℓ-1 ∪ {h ℓ }.\nThe goal is to find the optimal arm θ * = v\ni * 1 1 , . . . , v i * D D\nrepresented by indices i * = i * 1 , . . . , i * D , and the corresponding optimal predictor f * = A(D, θ * ) with optimal test risk r * = R f * , D test . In our experiments, we use the folds k in two different ways. In our rankbased metrics (Section 2.3.1), we use each fold as a separate single-validation experiment, iterating the hyperopt loop K times and averaging the statistics over the K runs. In the kth experiment, the score is thus\nr ℓ = R A(D k trn , θ ℓ ), D k val .\nIn our score-based metrics (Section 2.3.2), each trial consists in training the models and evaluating the risk on all folds, then averaging the score inside the hyperopt loop:\nr ℓ = 1 K K k=1 R A(D k trn , θ ℓ ), D k val .\nThis is a classical way to use cross-validation inside hyperopt. A third possibility, when the choice of the fold is also delegated to the engine in Step 1 (potentially pulling the same arm multiple times for a more precise estimate of the validation risk) is the subject of a future study.\nWe run all our experiments with three trial budgets:\nL m = m √ N with m = 1 (low)\n, 2 (medium), 3 (high) and grid size N = D j=1 N j . Hyperopt engines usually train (or update) a probabilistic surrogate model on H ℓ-1 in each iteration ℓ, and design i ℓ by optimizing an acquisition function, balancing between exploration and exploitation. This simple skeleton has quite a few nuts and bolts that need to be designed and tuned (what surrogate, how to jump-start the optimization, what acquisition function, how to robustly hyperopt the surrogate model itself, just to mention a few), so the performance of the different engines vary, even if they use the same basic loop. In addition, some engines do not use surrogate models: some successful techniques are based on evolutionary search, space-filling sampling, or local search with restarts. Arguably, in the low-budget regime, the initialization of the search is more important than the surrogate optimization, this latter becoming more useful in the medium and high-budget regimes." }, { "figure_ref": [], "heading": "How we compare engines", "publication_ref": [], "table_ref": [], "text": "We used two tests to compare engines: rank-based and score-based. Rank-based metrics abstract away the score so they easier to aggregate between different metrics, data sets, and models; score-based metrics are closer to what the data scientist is interested in, measuring how much one can improve the score on a fixed budget or reach a certain score with a minimum budget. To aggregate experiments, care needs to be taken to normalize the improvements across metrics, data sets, and models." }, { "figure_ref": [], "heading": "Rank-based metrics", "publication_ref": [], "table_ref": [], "text": "Rank-based metrics answer the question: what is the probability that an engine performs better than random search? 
The basic gist is first to read out, from the pre-computed full test score table T = { r^i = R(A(D_trn, i), D_test) : i ∈ G }, the ranks ρ = (ρ_1, . . . , ρ_L) of the test score sequence r^1, . . . , r^L generated by a given engine (r^ℓ is the ρ_ℓ-th best score in T). Once generated, we compare ρ to a random draw ρ̃ of L ranks from the integer set N = {1, . . . , N}, where N = |G| = |T| is the grid size. ρ̃ represents the rankings produced by random search with the same budget L. To compare ρ to ρ̃, we use a statistics s designed according to what we expect from a good hyperopt engine: find good arms as fast as possible. Formally, let s : N^L → R be a function that maps N^L, a set of L integers from [1, N], to the real line. For simplicity, without the loss of generality, we assume that s assigns higher values to better rankings. We then define p(better than random) = p(s(ρ) > s(ρ̃)), where ρ̃ ∈ N^L is a random set of integers drawn without replacement from [1, N]. For some statistics s, p can be computed analytically, but in our experiments we simply use J = 10^5 random draws {ρ̃_j}_{j=1}^{J} and estimate p by counting the number of times that s(ρ) beats s(ρ̃_j):\np(better than random) ≈ (1/J) ∑_{j=1}^{J} I(s(ρ) > s(ρ̃_j)). (1)\nWe experimented with various statistics, for example: the time to reach a top 10% arm, or the bottom (best) rank in ρ. We report results using the discounted cumulative gain (DCG), a popular metrics used for scoring ranking algorithms. We use DCG 10%, which is a weighted count of top 10% arms present in the L arms generated by the engine. DCG 10% is a "shaded" statistics that favors low-rank (good) arms in ρ appearing as early as possible. Formally, DCG 10% is defined by\ns_{DCG 10%}(ρ) = ∑_{ℓ=1}^{L} (1 / log_2(ℓ + 1)) I(ρ_ℓ ≤ 0.1N). (2)\nThe advantage of p(better than random) is that it can be averaged over seeds, folds, metrics, data sets, and models. Its disadvantage is that although it correlates with improvement over random search, it is not the same. It is possible that an engine consistently produces better sequences than random search, but the improvement is small." }, { "figure_ref": [], "heading": "Score-based metrics", "publication_ref": [], "table_ref": [], "text": "Score-based metrics answer the question: how much do we improve the score of random search by using a hyperopt engine? First, let us denote the test score of the best arm by r* = r^{ρ_1}, where ρ_1 is the index of the arm with the best validation score: ρ_1 = argmax_{ℓ=1,...,L} r̄^ℓ. Note that, in general, r* is not the best test score, r* ≠ max(r^1, . . . , r^L), since r̄^ℓ ≠ r^ℓ. The issue in using the numerical value of r* is that its scale depends on the model, the data set, and the metrics. To make this metric easy to aggregate, we normalize it between r^rand, the expected best score of the random search with budget L, and r^grid = r^{i*}, the test score of the best arm i* = argmax_{i∈G} r̄^i in the full grid search, obtaining r̄* = 100 (r* - r^rand) / (r^grid - r^rand). 
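A small sketch of the rank-based comparison defined in equations (1) and (2), assuming 1-based ranks where rank 1 is the best arm of the full table; the function names and the example rank sequence are ours, not the paper's code.

```python
import math
import random

def dcg_top10(ranks, N):
    """Equation (2): discounted count of top-10% arms among the pulled arms."""
    return sum(1.0 / math.log2(l + 1)
               for l, rank in enumerate(ranks, start=1)
               if rank <= 0.1 * N)

def p_better_than_random(engine_ranks, N, n_draws=100_000, seed=0):
    """Equation (1): Monte Carlo estimate of P(s(rho) > s(rho_tilde))."""
    rng = random.Random(seed)
    L = len(engine_ranks)
    s_engine = dcg_top10(engine_ranks, N)
    wins = 0
    for _ in range(n_draws):
        random_ranks = rng.sample(range(1, N + 1), L)  # L ranks without replacement
        if s_engine > dcg_top10(random_ranks, N):
            wins += 1
    return wins / n_draws

# Example: an engine that finds top arms early on a 63-point grid with budget L = 8.
print(p_better_than_random([5, 40, 2, 33, 17, 60, 9, 21], N=63, n_draws=10_000))
```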
r^rand can be computed analytically by\nr^rand = [∑_{ℓ=1}^{N} (1 - p)^{ℓ-1} r^{ρ_ℓ}] / [∑_{ℓ=1}^{|T|} (1 - p)^{ℓ-1}],\nwhere p = L/N is the probability of pulling a random arm, and r^{ρ_ℓ} is the test score of the ℓth rank statistics of the validation score table T.\nSince we are interested in score improvement, when we aggregate r̄*_i over a set of experiments i, we weight r̄*_i by the maximum possible improvement r^grid_i - r^rand_i, so, formally, the improvement degree reported in Section 4 is given by\nr̄*_mean = 100 [∑_i (r*_i - r^rand_i)] / [∑_i (r^grid_i - r^rand_i)]. (3)" }, { "figure_ref": [], "heading": "The overall score", "publication_ref": [], "table_ref": [], "text": "The baseline of p(better than random)[%] is 50, and the baseline of the improvement degree r̄* is 0. The maximum of both scores is 100. We weight them equally, leading to overall = (p(better than random)[%] - 50) + r̄*/2. (4)" }, { "figure_ref": [], "heading": "Hyperopt engines", "publication_ref": [ "b18", "b1", "b2", "b21", "b11", "b17", "b27" ], "table_ref": [], "text": "Ray Tune (Liaw et al., 2018) is a model selection and hyperparameter optimization library that affords a unified interface to many popular hyperparameter optimization engines. We used all engines "out of the box", according to the examples provided in the documentation (except for SigOpt -behind paywall; and Dragonfly -cannot handle integer grid) to avoid "overfitting" our set of experiments. We also avoided consulting the authors to remain unbiased.\n1. AX (Bakshy et al., 2018) is a domain-agnostic engine built and used by Meta for a wide variety of sequential optimization tasks (besides hyperopt, for A/B testing, infrastructure optimization, and hardware design). It links to BOTorch (Balandat et al., 2020), a Bayesian optimization library built on GPyTorch (Gardner et al., 2018), a GP library built on PyTorch (Paszke et al., 2017). It is one of the top three engines overall, and especially good in the low-number-of-trials regime, which may be due to the smart space-filling strategy that jump-starts the optimization.\n2. BayesOpt (Nogueira, 2014) is a standalone vanilla GP-based Bayesian optimization library. It performed very well on forest-type models, and very badly on SVM and neural nets.\n3. BOHB (Falkner et al., 2018) combines Bayesian optimization and Hyperband (Li et al., 2018), which is a bandit-based approach that speeds up random search using adaptive resource allocation and early stopping. In this study, BOHB did not beat random search, possibly due to the default settings, inadequate for our setup.\n4. CFO (Wu et al., 2021) is the first of two engines in Microsoft's FLAML library. It is a local search method with randomized restarts. It is at a disadvantage in this study since its main forte is to manage trials with varying costs, whereas here we measure performance at a given number of trials. " }, { "figure_ref": [], "heading": "BlendSearch", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Results", "publication_ref": [ "b25", "b12", "b7", "b9", "b25", "b12" ], "table_ref": [ "tab_2", "tab_2" ], "text": "We tested all engines on five binary classification algorithms (Table 2) and five data sets (Table 1). All data sets are downloaded from OpenML (Vanschoren et al., 2013; Feurer et al., 2019). They were selected for their size, to be able to precisely estimate the expected score on the test set. Training and validation sizes are uniformly 4500 and 500, respectively. 
We found this size to be the sweet spot between making the study meaningful and being able to run all the experiments. In addition, the size of 5000 training points is also quite relevant to a lot of real-world applications: it is used in other horizontal studies on tabular data (Caruana et al., 2004), and it is substantially larger than the data sets used in some other hyperopt studies (Cowen-Rivers et al., 2020). Experiments with 1000 training points (not reported) led to results that were not substantially different. Larger training and validation sets would mean smaller noise of the validation score, but it seems that the engines are not too sensitive to the noise level (relative standard errors are ⪅ 1% on the validation score, ⪅ 0.1% on the test score), so our results will likely hold with larger training samples. All data are binary classification with relatively balanced classes. We used the area under the ROC curve (AUC) as the target metric. Our experiments could and should be repeated on other tasks and metrics, but, similarly to the data size, unless the "nature" of the optimization problem (noise, smoothness) is radically different, our result should be generalizable.\nWe used five popular binary classification algorithms, designed specifically for small tabular data (Table 2). We evaluated the rank-based metrics on folds independently, using ten seeds for each engine, ending up with a hundred runs for each (engine, model, data set) triple. We evaluated the score-based metrics on the cross-validated score (since this is what an experienced data scientist would do), using twenty-five seeds per triple.\nTable 3 and Figure 1 show the rankings and the numerical results we obtained. All engines, except for BOHB, are significantly better than random search, although the gap varies between 55% and 80% probability (Table 3, columns 3-5). Three engines seem to perform significantly better than the rest: HEBO, BlendSearch, and AX. Using these engines, we can consistently accelerate random search by two to three times, or, from another angle, cut the difference between full grid search and random search with a given budget by about half (improvement degree = 50). HEBO is especially robust, managing to reach an improvement degree of 50 on all five models. The forte of AX is its performance on an extra small budget (m = 1).\nTable 1: Data sets. All data sets are downloaded from OpenML (Vanschoren et al., 2013; Feurer et al., 2019). Training and validation sizes are uniformly 4500 and 500, respectively. With the exception of CoverType, all data are binary classification; CoverType is multi-class, we use the two most populous classes here. AUC* is the best cross-validated test score obtained by full grid search over the five models (Table 2).\nThe search grids we used are relatively coarse (Appendix A), and we found that this may disadvantage some of the engines towards the bottom of the rankings. In preliminary experiments we found that some of these engines can pick up the difference if a finer grid is used. Nevertheless, the overall best result will not be better than when using a coarser grid: these engines improve only relative to random search and to themselves with a coarser grid. Since the best engines in our rankings are more robust to grid resolution, even if one can afford a finer grid (see counterarguments in Section 2.1), we suggest that our top three engines be used.\nTable 3: Summary of results. 
The overall score is the sum of (p(better than random)[%] - 50) and 0.5 × improvement degree. The probability that an engine is better than random (1) is based on the DCG 10% statistics (2) computed on the arms pulled by the engine and 10^5 draws of random search (Section 2.3.1). Improvement degree (3) measures the improvement over random search, on a scale of 100, determined by the difference between the score of full grid search and random search (Section 2.3.2). Results are reported with three trial budgets L = m × √N, with m = 1, 2, 3, where N is the size of the full search grid. Forte is the set of models on which the engine has an improvement degree of 50 at any of the budgets. We found that some engines seem to specialize, for example, Nevergrad is strong at optimizing SVM, whereas SkOpt is good at random forests and XGBoost. What this means is that in a comparison study of two algorithms on a data set, the winner may depend on which hyperopt engine is used. In fact, we found that out of the 8250 possible pairwise comparisons (pairs of models, pairs of engines, one of the data sets), about 4.3% invert the winner model. This may have quite serious consequences in systematic comparison studies, so in such studies we suggest that either full grid search or random search be used. The latter will add noise to the statistical tests but will not bias them.\nEngine | overall score | p(better than random) [%], m = 1 / 2 / 3 | Improvement degree [0, 100], m = 1 / 2 / 3 | forte\nHEBO | 46 | 59 ± 0 / 69 ± 0 / 76 ± 0 | 33 ± 2 / 63 ± 2 / 74 ± 3 | RF2, RF3, SVM, XGB, PYTAB\nBlendSearch | 45 | 62 ± 0 / 72 ± 0 / 79 ± 0 | 24 ± 2 / 56 ± 3 / 64 ± 5 | RF2, RF3, SVM, XGB\nAX | 44 | 68 ± 0 / 74 ± 0 / 74 ± 0 | 56 ± 3 / 50 ± 4 / 22 ± 6 | RF2, RF3, XGB\nSkOpt | 23 | 57 ± 0 / 60 ± 1 / 69 ± 0 | 4 ± 4 / 17 ± 5 / 46 ± 5 | RF3, XGB\nHyperopt | 18 | 57 ± 0 / 61 ± 1 / 69 ± 1 | -1 ± 4 / 6 ± 5 / 29 ± 6 | XGB\nOptuna | 11 | 57 ± 0 / 59 ± 1 / 65 ± 1 | -1 ± 5 / 5 ± 5 / -1 ± 8 |\nBayesOpt | 6 | 63 ± 0 / 64 ± 0 / 66 ± 0 | 19 ± 0 / -21 ± 2 / -49 ± 6 | RF2, RF3, XGB\nNevergrad | 5 | 57 ± 0 / 58 ± 1 / 63 ± 1 | -6 ± 4 / -3 ± 6 / -20 ± 8 | SVM\nBOHB | 0 | 55 ± 0 / 51 ± 1 / 49 ± 1 | 4 ± 4 / -6 ± 5 / -7 ± 7 |\nCFO | -7 | 54 ± 1 / 54 ± 1 / 61 ± 1 | -55 ± 9 / -32 ± 8 / 9 ± 7 |\nZOOpt | -43 | 62 ± 1 / 59 ± 1 / 60 ± 1 | -5 ± 6 / -94 ± 12 / -217 ± 20 |" }, { "figure_ref": [], "heading": "Conclusion and future works", "publication_ref": [], "table_ref": [], "text": "First, we are planning to repeat our methodology for other tasks (e.g., regression) and models. Second, some engines may have better settings for our coarse grid setup, so we are planning to design a protocol in which engine authors can give us a limited number of non-default settings to try. Third, we are planning to explore the effect of increasing the resolution of the grid, using our third protocol, to settle whether setting any grid is solid advice or we should leave all our search spaces as high-resolution as possible.\nWe paired our two metrics (rank-based and score-based) and two protocols (cross-validation and single fold) in two out of the four combinations. While most of the rankings match, there are curious differences which may be due to the metrics but also due to the noise level (which is about three times higher in the single validation case). We are planning to run a brief study to settle this question." }, { "figure_ref": [], "heading": "A.1 Random forests with two hyperparameters", "publication_ref": [], "table_ref": [], "text": "Hyperparameters and grid of values:\n• max leaf nodes = [2,5,10,20,50,100,200,500,1000]\n• n estimators = [10,20,50,100,200,500,1000]\nTable 4: Summary of results for random forests with two hyperparameters. 
The overall score is the sum of (p(better than random)[%] -50) and 0.5 × improvement degree. The probability that an engine is better than random (1) is based on the DCG 10% statistics (2) computed on the arms pulled by the engine and 10 5 draws of random search (Section 2.3.1). Improvement degree (3) measures the improvement over random search, on a scale of 100, determined by the difference between the score of full grid search and random search (Section 2.3.2). Results are reported with three trial budgets L = m × √ N , with m = 1, 2, 3, where N is the size of the full search grid." }, { "figure_ref": [], "heading": "Engine", "publication_ref": [], "table_ref": [], "text": "overall p(better than random) • max leaf nodes = [2,5,10,20,50,100,200,500,1000] • n estimators = [10,20,50,100,200,500,1000] Table 5: Summary of results for random forests with three hyperparameters. The overall score is the sum of (p(better than random)[%] -50) and 0.5 × improvement degree. The probability that an engine is better than random (1) is based on the DCG 10% statistics (2) computed on the arms pulled by the engine and 10 5 draws of random search (Section 2.3.1). Improvement degree (3) measures the improvement over random search, on a scale of 100, determined by the difference between the score of full grid search and random search (Section 2.3.2). Results are reported with three trial budgets L = m × √ N , with m = 1, 2, 3, where N is the size of the full search grid.\n[%] Improvement degree [0, 100] score m = 1 m = 2 m = 3 m = 1 m = 2 m = 3 AX 75 66 ± 1 86 ± 1 86 ± 1 73 ± 4 100 ± 0 100 ± 0 HEBO 49 55 ± 0 61 ± 1 71 ± 1 45 ± 0 86 ± 4 89 ± 2 BlendSearch 48 61 ± 1 68 ± 1 75 ± 1 8 ± 6 81 ± 2 92 ± 3 BayesOpt 34 57 ± 0 66 ± 0 75 ± 1 -37 ± 0 68 ± 1 76 ± 3 SkOpt 15 60 ± 1 50 ± 1 63 ± 1 -3 ± 9 -23 ± 12 70 ± 5 Optuna 11 63 ± 1 54 ± 1 63 ± 1 -3 ± 10 -8 ±" }, { "figure_ref": [], "heading": "Engine", "publication_ref": [], "table_ref": [], "text": "overall p(better than random) • learning rate = [0.1, 0.3, 0.5, 0.7, 1.0]\n[%] Improvement degree [0, 100] score m = 1 m = 2 m = 3 m = 1 m = 2 m = 3 AX 81 82 ± 1 95 ± 0 93 ± 1 81 ± 3 88 ± 3 77 ± 5 BayesOpt 67 85 ± 0 93 ± 0 90 ± 1 74 ± 3 56 ± 1 38 ± 3 HEBO60\n• max depth = [2,3,4,5,7,10,20,50] • n estimators = [10,20,50,100,200,500,1000] Table 6: Summary of results for XGBoost with three hyperparameters. The overall score is the sum of (p(better than random)[%] -50) and 0.5 × improvement degree. The probability that an engine is better than random (1) is based on the DCG 10% statistics (2) computed on the arms pulled by the engine and 10 5 draws of random search (Section 2.3.1). Improvement degree (3) measures the improvement over random search, on a scale of 100, determined by the difference between the score of full grid search and random search (Section 2.3.2). Results are reported with three trial budgets L = m × √ N , with m = 1, 2, 3, where N is the size of the full search grid. " }, { "figure_ref": [], "heading": "A.4 Support vector machines with two hyperparameters", "publication_ref": [], "table_ref": [], "text": "Hyperparameters and grid of values: 03125, 0.125, 0.5, 2.0, 8.0, 32.0, 128.0, 512.0, 2.0480e+03, 8.1920e+03, 3.2768e+04] • gamma = [3.0518e-05, 0.0001221, 0.0004883, 0.001953, 0.007812, 0.03125, 0.125, 0.5, 2.0, 8.0] Table 7: Summary of results for support vector machines with two hyperparameters. The overall score is the sum of (p(better than random)[%] -50) and 0.5 × improvement degree. 
The probability that an engine is better than random (1) is based on the DCG 10% statistics (2) computed on the arms pulled by the engine and 10 5 draws of random search (Section 2.3.1). Improvement degree (3) measures the improvement over random search, on a scale of 100, determined by the difference between the score of full grid search and random search (Section 2.3.2). Results are reported with three trial budgets L = m × √ N , with m = 1, 2, 3, where N is the size of the full search grid. " }, { "figure_ref": [], "heading": "A.5 Pytab with four hyperparameters", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Hyperparameters and grid of values:", "publication_ref": [], "table_ref": [], "text": "• layers = [512 -256, 1024 -512, 1024 -512 -512]\n• learning rate = [0.0001, 0.001, 0.01]\n• n batches = [1, 10, 20, 50, 100]\n• n epochs = [50, 100, 200] Table 8: Summary of results for pytab with four hyperparameters. The overall score is the sum of (p(better than random)[%] -50) and 0.5 × improvement degree. The probability that an engine is better than random ( 1) is based on the DCG 10% statistics (2) computed on the arms pulled by the engine and 10 5 draws of random search (Section 2.3.1). Improvement degree (3) measures the improvement over random search, on a scale of 100, determined by the difference between the score of full grid search and random search (Section 2.3.2). Results are reported with three trial budgets L = m × √ N , with m = 1, 2, 3, where N is the size of the full search grid. " } ]
We run an independent comparison of all hyperparameter optimization (hyperopt) engines available in the Ray Tune library. We introduce two ways to normalize and aggregate statistics across data sets and models, one rank-based, and another one sandwiching the score between the random search score and the full grid search score. This affords us i) to rank the hyperopt engines, ii) to make generalized and statistically significant statements on how much they improve over random search, and iii) to make recommendations on which engine should be used to hyperopt a given learning algorithm. We find that most engines beat random search, but that only three of them (HEBO, AX, and BlendSearch) clearly stand out. We also found that some engines seem to specialize in hyperopting certain learning algorithms, which makes it tricky to use hyperopt in comparison studies, since the choice of the hyperopt technique may favor some of the models in the comparison.
A systematic study comparing hyperparameter optimization engines on tabular data
[ { "figure_caption": "Figure 1 :1Figure 1: Summary of results. The probability that an engine is better than random (1) is based on the DCG 10% statistics (2) computed on the arms pulled by the engine and 10 5 draws of random search (Section 2.3.1). Improvement degree (3) measures the improvement over random search, on a scale of 100, determined by the difference between the score of full grid search and random search (Section 2.3.2). Results are reported with three trial budgets L = m × √ N , with m = 1, 2, 3, where N is the size of the full search grid.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Summary of results for random forests with two hyperparameters. The probability that an engine is better than random (1) is based on the DCG 10% statistics (2) computed on the arms pulled by the engine and 10 5 draws of random search (Section 2.3.1). Improvement degree (3) measures the improvement over random search, on a scale of 100, determined by the difference between the score of full grid search and random search (Section 2.3.2). Results are reported with three trial budgets L = m × √ N , with m = 1, 2, 3, where N is the size of the full search grid.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Summary of results for random forests with three hyperparameters. The probability that an engine is better than random (1) is based on the DCG 10% statistics (2) computed on the arms pulled by the engine and 10 5 draws of random search (Section 2.3.1). Improvement degree (3) measures the improvement over random search, on a scale of 100, determined by the difference between the score of full grid search and random search (Section 2.3.2). Results are reported with three trial budgets L = m × √ N , with m = 1, 2, 3, where N is the size of the full search grid.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "-124 ± 21 -214 ± 32", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Summary of results for XGBoost with three hyperparameters. The probability that an engine is better than random (1) is based on the DCG 10% statistics (2) computed on the arms pulled by the engine and 10 5 draws of random search (Section 2.3.1). Improvement degree (3) measures the improvement over random search, on a scale of 100, determined by the difference between the score of full grid search and random search (Section 2.3.2). Results are reported with three trial budgets L = m × √ N , with m = 1, 2, 3, where N is the size of the full search grid.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Nevergrad(Rapin and Teytaud, 2018) is a gradient-free optimization engine from Meta. As suggested by the Ray Tune documentation, we are using the 1+1 engine, which is perhaps not the best choice for our regime. ZOOpt(Liu et al., 2018) is zeroth-order (derivative-free) optimization engine. It performs well on our ranking-based statistics but not on the score-based metrics. Its main issue seems to be that it uses only a small fraction of the trial budget: once it thinks it found the optimum, it stops exploring.", "figure_data": "9. 
Optuna (Akiba et al., 2019) is a new-generation hyperopt library providing featuressuch as pruning and dynamically constructed search spaces. Here we use its basicBayesian optimization core which barely beats random search.10. SkOpt is an open-source community-developed Bayesian optimization library. Itperforms better than random search but overall does not reach the performance of thetop engines.11.cost-sensitive optimization, and it uses no surrogate model, yet it is one of our top threeengines overall.6. HEBO (Cowen-Rivers et al., 2020) is Huawei's engine that won the 2020 NeurIPS BBOChallenge. It adds sophisticated processing steps to the classical BO framework, suchas output warping, multi-objective acquisitions, non-stationary and heteroscedasticmodels, and input warping. It is one of the top three engines overall, and especiallygood with moderate and higher number of trials.7. Hyperopt (Bergstra et al., 2013) is one of the oldest hyperopt libraries, implementing", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Table 2). Binary classification models. RF2: ScikitLearn (Pedregosa et al., 2011) random forests with two hyperparameters; RF3: ScikitLearn (Pedregosa et al., 2011) random forests with three hyperparameters; XGB: XGBoost (Chen and Guestrin, 2016), SVM: ScikitLearn (Pedregosa et al., 2011) support vector machines; PYTAB: PyTorch Tabular(Joseph, 2021). D is the number of hyperparameters, N is the size of the search grid, L is the minimum trial budget (we used L, 2L, 3L budgets), and AUC is the mean of the best test scores over our experiments. More information and model-wise results are in Appendix A.", "figure_data": "Data setn trn n val n test% majority AUC *BNGCreditG 4500 500 40222750.9198Adult4500 500 995000 700.8397CoverType4500 500 490141 570.9103Higgs4500 500 93049530.7735Jannis4500 500 62312570.8419ModelD NL =√N AUCRF22 9 × 7 = 6380.8496RF33 4 × 9 × 7 = 252160.8503XGB3 5 × 9 × 7 = 315180.8553SVM2 11 × 10 = 110100.822PYTAB 4 3 × 3 × 5 × 3 = 135 120.8248", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
Balázs Kégl
[ { "authors": "Takuya Akiba; Shotaro Sano; Toshihiko Yanase; Takeru Ohta; Masanori Koyama", "journal": "", "ref_id": "b0", "title": "Optuna: A next-generation hyperparameter optimization framework", "year": "2019" }, { "authors": "Eytan Bakshy; Lili Dworkin; Brian Karrer; Konstantin Kashin; Benjamin Letham; Ashwin Murthy; Shaun Singh", "journal": "", "ref_id": "b1", "title": "AE: A domain-agnostic platform for adaptive experimentation", "year": "2018" }, { "authors": "Maximilian Balandat; Brian Karrer; Daniel Jiang; Samuel Daulton; Ben Letham; Andrew G Wilson; Eytan Bakshy", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b2", "title": "BoTorch: a framework for efficient Monte-Carlo Bayesian optimization", "year": "2020" }, { "authors": "Rémi Bardenet; Balázs Kégl", "journal": "", "ref_id": "b3", "title": "Surrogating the surrogate: accelerating gaussian-processbased global optimization with a mixture cross-entropy algorithm", "year": "2010" }, { "authors": "James Bergstra; Rémi Bardenet; Yoshua Bengio; Balázs Kégl", "journal": "", "ref_id": "b4", "title": "Algorithms for hyperparameter optimization", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b5", "title": "", "year": "2011" }, { "authors": "James Bergstra; Daniel Yamins; David Cox", "journal": "", "ref_id": "b6", "title": "Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures", "year": "2013-06" }, { "authors": "Rich Caruana; Alexandru Niculescu-Mizil; Geoff Crew; Alex Ksikes", "journal": "ACM", "ref_id": "b7", "title": "Ensemble selection from libraries of models", "year": "2004" }, { "authors": "Tianqi Chen; Carlos Guestrin", "journal": "ACM", "ref_id": "b8", "title": "XGBoost: A scalable tree boosting system", "year": "2016" }, { "authors": "Wenlong Alexander I Cowen-Rivers; Rasul Lyu; Zhi Tutunov; Antoine Wang; Ryan Rhys Grosnit; Hao Griffiths; Jun Jianye; Haitham Wang; Ammar Bou", "journal": "", "ref_id": "b9", "title": "An empirical study of assumptions in Bayesian optimisation", "year": "2020" }, { "authors": "Katharina Eggensperger; Philipp Müller; Neeratyoy Mallik; Matthias Feurer; Rene Sass; Aaron Klein; Noor Awad; Marius Lindauer; Frank Hutter", "journal": "", "ref_id": "b10", "title": "HPOBench: A collection of reproducible multi-fidelity benchmark problems for HPO", "year": "2021" }, { "authors": "Stefan Falkner; Aaron Klein; Frank Hutter", "journal": "PMLR", "ref_id": "b11", "title": "BOHB: Robust and efficient hyperparameter optimization at scale", "year": "2018-07-15" }, { "authors": "Matthias Feurer; Jan N Van Rijn; Arlind Kadra; Pieter Gijsbers; Neeratyoy Mallik; Sahithya Ravi; Andreas Mueller; Joaquin Vanschoren; Frank Hutter", "journal": "", "ref_id": "b12", "title": "OpenML-Python: an extensible Python API for OpenML", "year": "2019" }, { "authors": "Jacob Gardner; Geoff Pleiss; Q Kilian; David Weinberger; Andrew G Bindel; Wilson", "journal": "", "ref_id": "b13", "title": "GPyTorch: Blackbox matrix-matrix Gaussian process inference with GPU acceleration", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b14", "title": "", "year": "2018" }, { "authors": "Manu Joseph", "journal": "", "ref_id": "b15", "title": "PyTorch Tabular: A framework for deep learning with tabular data", "year": "2021" }, { "authors": "Aaron Klein; Frank Hutter", "journal": "", "ref_id": "b16", "title": "Tabular benchmarks for joint architecture and hyperparameter optimization", "year": "2019" 
}, { "authors": "Liam Li; Kevin Jamieson; Giulia Desalvo; Afshin Rostamizadeh; Ameet Talwalkar", "journal": "Journal of Machine Learning Research", "ref_id": "b17", "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "year": "2018" }, { "authors": "Richard Liaw; Eric Liang; Robert Nishihara; Philipp Moritz; Joseph E Gonzalez; Ion Stoica", "journal": "", "ref_id": "b18", "title": "Tune: A research platform for distributed model selection and training", "year": "2018" }, { "authors": "Yu-Ren Liu; Yi-Qi Hu; Hong Qian; Yang Yu; Chao Qian", "journal": "", "ref_id": "b19", "title": "ZOOpt: Toolbox for derivative-free optimization", "year": "2018" }, { "authors": "Fernando Nogueira", "journal": "", "ref_id": "b20", "title": "Bayesian Optimization: Open source constrained global optimization tool for Python", "year": "2014" }, { "authors": "Adam Paszke; Sam Gross; Soumith Chintala; Gregory Chanan; Edward Yang; Zachary Devito; Zeming Lin; Alban Desmaison; Luca Antiga; Adam Lerer", "journal": "", "ref_id": "b21", "title": "Automatic differentiation in PyTorch", "year": "2017" }, { "authors": "Fabian Pedregosa; Gaël Varoquaux; Alexandre Gramfort; Vincent Michel; Bertrand Thirion; Olivier Grisel; Mathieu Blondel; Peter Prettenhofer; Ron Weiss; Vincent Dubourg", "journal": "Journal of machine learning research", "ref_id": "b22", "title": "Scikit-learn: Machine learning in Python", "year": "2011-10" }, { "authors": "J Rapin; O Teytaud", "journal": "", "ref_id": "b23", "title": "Nevergrad -A gradient-free optimization platform", "year": "2018" }, { "authors": "Jasper Snoek; Hugo Larochelle; Ryan P Adams", "journal": "Curran Associates, Inc", "ref_id": "b24", "title": "Practical Bayesian optimization of machine learning algorithms", "year": "2012" }, { "authors": "Joaquin Vanschoren; Jan N Van Rijn; Bernd Bischl; Luis Torgo", "journal": "SIGKDD Explorations", "ref_id": "b25", "title": "OpenML: Networked science in machine learning", "year": "2013" }, { "authors": "Chi Wang; Qingyun Wu; Silu Huang; Amin Saied", "journal": "", "ref_id": "b26", "title": "Economical hyperparameter optimization with blended search strategy", "year": "2021" }, { "authors": "Qingyun Wu; Chi Wang; Silu Huang", "journal": "", "ref_id": "b27", "title": "Frugal optimization for cost-related hyperparameters", "year": "2021" }, { "authors": "Baohe Zhang; Raghu Rajan; Luis Pineda; Nathan Lambert; André Biedenkapp; Kurtland Chua; Frank Hutter; Roberto Calandra", "journal": "PMLR", "ref_id": "b28", "title": "On the importance of hyperparameter optimization for model-based reinforcement learning", "year": "2021-04-15" }, { "authors": "A ", "journal": "", "ref_id": "b29", "title": "Results for each model", "year": "" } ]
[ { "formula_coordinates": [ 3, 90, 311.44, 79.07, 14.73 ], "formula_id": "formula_0", "formula_text": "f ℓ k = A(D k trn , θ ℓ )" }, { "formula_coordinates": [ 3, 193.47, 467.72, 196.05, 23.99 ], "formula_id": "formula_1", "formula_text": "T = r i k = R A(D k trn , i), D k val i∈G k∈{1,...,K}" }, { "formula_coordinates": [ 4, 122.27, 398.68, 399.73, 31.69 ], "formula_id": "formula_2", "formula_text": "θ ℓ = v i ℓ 1 1 , . . . , v i ℓ D D represented by i ℓ = i ℓ 1 , . . . , i ℓ D ;" }, { "formula_coordinates": [ 4, 307.92, 462.67, 88.22, 11.52 ], "formula_id": "formula_3", "formula_text": "H ℓ = H ℓ-1 ∪ {h ℓ }." }, { "formula_coordinates": [ 4, 328.64, 484.19, 46.86, 18.34 ], "formula_id": "formula_4", "formula_text": "i * 1 1 , . . . , v i * D D" }, { "formula_coordinates": [ 4, 258.85, 555.43, 125.43, 14.73 ], "formula_id": "formula_5", "formula_text": "r ℓ = R A(D k trn , θ ℓ ), D k val ." }, { "formula_coordinates": [ 4, 332.99, 581.92, 163.13, 15.78 ], "formula_id": "formula_6", "formula_text": "r ℓ = 1 K K k=1 R A(D k trn , θ ℓ ), D k val ." }, { "formula_coordinates": [ 4, 90, 657.18, 143.26, 19.87 ], "formula_id": "formula_7", "formula_text": "L m = m √ N with m = 1 (low)" }, { "formula_coordinates": [ 5, 186.98, 374.41, 169.21, 18.04 ], "formula_id": "formula_8", "formula_text": "T = r i = R A(D trn , i), D test i∈G" }, { "formula_coordinates": [ 5, 189.5, 598.04, 333.77, 33.71 ], "formula_id": "formula_9", "formula_text": "p(better than random) ≈ 1 J J j=1 I s(ρ) > s( ρ j ) .(1)" }, { "formula_coordinates": [ 6, 200.7, 118.42, 322.58, 33.98 ], "formula_id": "formula_10", "formula_text": "s DCG 10% (ρ) = L ℓ=1 1 log 2 (ℓ + 1) I ρ ℓ ≤ 0.1N .(2)" }, { "formula_coordinates": [ 6, 240.94, 436.47, 124.9, 33.55 ], "formula_id": "formula_11", "formula_text": "r rand = N ℓ=1 (1 -p) ℓ-1 r ρ ℓ |T | ℓ=1 (1 -p) ℓ-1" }, { "formula_coordinates": [ 6, 233.73, 561.91, 144.54, 31.92 ], "formula_id": "formula_12", "formula_text": "r * mean = 100 i r * i -r rand i i r grid i -r rand i ." 
}, { "formula_coordinates": [ 10, 95.98, 214.33, 464.52, 126.27 ], "formula_id": "formula_13", "formula_text": "Engine overall p(better than random) [%] Improvement degree [0, 100] forte score m = 1 m = 2 m = 3 m = 1 m = 2 m = 3 HEBO 46 59 ± 0 69 ± 0 76 ± 0 33 ± 2 63 ± 2 74 ± 3 RF2, RF3, SVM, XGB, PYTAB BlendSearch 45 62 ± 0 72 ± 0 79 ± 0 24 ± 2 56 ± 3 64 ± 5 RF2, RF3, SVM, XGB AX 44 68 ± 0 74 ± 0 74 ± 0 56 ± 3 50 ± 4 22 ± 6 RF2, RF3, XGB SkOpt 23 57 ± 0 60 ± 1 69 ± 0 4 ± 4 17 ± 5 46 ± 5 RF3, XGB Hyperopt 18 57 ± 0 61 ± 1 69 ± 1 -1 ± 4 6 ± 5 29 ± 6 XGB Optuna 11 57 ± 0 59 ± 1 65 ± 1 -1 ± 5 5 ± 5 -1 ± 8 BayesOpt 6 63 ± 0 64 ± 0 66 ± 0 19 ± 0 -21 ± 2 -49 ± 6 RF2, RF3, XGB Nevergrad 5 57 ± 0 58 ± 1 63 ± 1 -6 ± 4 -3 ± 6 -20 ± 8 SVM BOHB 0 55 ± 0 51 ± 1 49 ± 1 4 ± 4 -6 ± 5 -7 ± 7 CFO -7 54 ± 1 54 ± 1 61 ± 1 -55 ± 9 -32 ± 8 9 ± 7 ZOOpt -43 62 ± 1 59 ± 1 60 ± 1 -5 ± 6 -94 ± 12 -217 ± 20" }, { "formula_coordinates": [ 14, 124.06, 313.64, 359.96, 90.34 ], "formula_id": "formula_14", "formula_text": "[%] Improvement degree [0, 100] score m = 1 m = 2 m = 3 m = 1 m = 2 m = 3 AX 75 66 ± 1 86 ± 1 86 ± 1 73 ± 4 100 ± 0 100 ± 0 HEBO 49 55 ± 0 61 ± 1 71 ± 1 45 ± 0 86 ± 4 89 ± 2 BlendSearch 48 61 ± 1 68 ± 1 75 ± 1 8 ± 6 81 ± 2 92 ± 3 BayesOpt 34 57 ± 0 66 ± 0 75 ± 1 -37 ± 0 68 ± 1 76 ± 3 SkOpt 15 60 ± 1 50 ± 1 63 ± 1 -3 ± 9 -23 ± 12 70 ± 5 Optuna 11 63 ± 1 54 ± 1 63 ± 1 -3 ± 10 -8 ±" }, { "formula_coordinates": [ 15, 121.27, 329.63, 365.55, 57.46 ], "formula_id": "formula_15", "formula_text": "[%] Improvement degree [0, 100] score m = 1 m = 2 m = 3 m = 1 m = 2 m = 3 AX 81 82 ± 1 95 ± 0 93 ± 1 81 ± 3 88 ± 3 77 ± 5 BayesOpt 67 85 ± 0 93 ± 0 90 ± 1 74 ± 3 56 ± 1 38 ± 3 HEBO60" } ]
2024-03-30
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b59", "b78", "b60", "b78", "b56", "b60", "b78", "b74", "b21", "b3", "b59", "b78", "b23", "b34", "b40" ], "table_ref": [], "text": "With the growing popularity of 3D and virtual reality applications, there has been increasing interest in creating realistic 3D human models. In general, crafting 3D humans is labor-intensive, time-consuming, and requires collaboration from highly skilled professionals. To bring lifelike 3D humans to reality and to support both expert and amateur creators in this task, it is essential to enable users to create textured 3D humans from simple 2D images or photos.\nReconstructing a fully textured human mesh from a single-view image presents an ill-posed problem with two major challenges. Firstly, the appearance information required for generating texture in unobserved regions is missing. Secondly, 3D information for mesh reconstruction, such as depth, surface, and body pose, becomes ambiguous in a 2D image. Previous efforts [4,60,79] attempted to tackle these challenges in a data-driven manner, focusing on training neural networks with image-mesh pairs. However, these approaches struggle with images featuring unseen appearances or poses, due to limited 3D human training data. More recent studies [61,73,79] introduced additional 3D reasoning modules to enhance robustness against unseen poses. Yet, generating realistic and full-body textures from unseen appearances still remains an unsolved problem.\nTo address the above challenges, we propose SiTH, a novel pipeline that integrates an image-conditioned diffusion model to reconstruct lifelike 3D textured humans from monocular images. At the core of our approach is the decomposition of the challenging single-view problem into two subproblems: generative back-view hallucination and mesh reconstruction. This decomposition enables us to exploit the generative capability of pretrained diffusion models to guide full-body mesh and texture reconstruction. The workflow is depicted in Fig. 1. Given a front-view image, the first stage involves hallucinating a perceptually consistent back-view image using image-conditioned diffusion. The second stage reconstructs full-body mesh and texture, utilizing both the front and back-view images as guidance.\nMore specifically, we employ the generative capabilities of pretrained diffusion models (e.g. Stable Diffusion [57]) to infer unobserved back-view appearances for full-body 3D reconstruction. The primary challenge in ensuring the realism of 3D meshes lies in generating images that depict spatially aligned body shapes and perceptually consistent appearances with the input images. While diffusion models demonstrate impressive generative abilities with text conditioning, they are limited in producing desired back-view images using the frontal images as image conditions. To overcome this, we adapt the network architecture to enable conditioning on frontal images and introduce additional trainable components following ControlNet [77] to provide pose and mask control. To fully tailor this model to our task while retaining its original generative power, we carefully fine-tune the diffusion model using multi-view images rendered from 3D human scans. Complementing this generative model, we develop a mesh reconstruction module to recover full-body textured mesh from front and back-view images. We follow prior work in handling 3D ambiguity through normal [61] and skinned body [73,79] guidance. 
It is worth noting that the models for both subproblems are trained using the same public THuman2.0 [75] dataset, which consists of as few as 500 scans.\nTo advance research in single-view human reconstruction, we created a new benchmark based on the high-quality CustomHumans [22] dataset and conducted comprehensive evaluations against state-of-the-art methods. Compared to existing end-to-end methods [4,60,79], our two-stage pipeline can recover full-body textured meshes, including back-view details, and demonstrates robustness to unseen images. In contrast to time-intensive diffusion-based optimization methods [24,35,41], our pipeline efficiently produces high-quality textured meshes in under two minutes. Moreover, we explored applications combining text-guided diffusion models, showing SiTH's versatility in 3D human creation. Our contributions are summarized as follows:\n• We introduce SiTH, a single-view human reconstruction pipeline capable of producing high-quality, fully textured 3D human meshes within two minutes. • Through decomposing the single-view reconstruction task, SiTH can be efficiently trained with public 3D human scans and is more robust to unseen images. • We establish a new benchmark featuring more diverse subjects for evaluating textured human reconstruction." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b13", "b25", "b27", "b28", "b61", "b66", "b70", "b59", "b60", "b78", "b41", "b50", "b24", "b20", "b3", "b11", "b32", "b73", "b17", "b51", "b55", "b56", "b58", "b18", "b39", "b46", "b52", "b67", "b47", "b62", "b65", "b52", "b2", "b9", "b23", "b26", "b63", "b75", "b40", "b53", "b34", "b7", "b12", "b35", "b51", "b55", "b56", "b58", "b22", "b36", "b57", "b30", "b15", "b40" ], "table_ref": [], "text": "Single-view human mesh reconstruction. Reconstructing 3D humans from monocular inputs [14,17,26,28,29,62,67,71] has gained more popularity in research.\nIn this context, we focus on methods that recover 3D human shapes, garments, and textures from a single image. As a seminal work, Saito et al. [60] first proposed a data-driven method with pixel-aligned features and neural fields [72]. Its follow-up work PIFuHD [61] further improved this framework with high-res normal guidance. Later approaches extended this framework with additional human body priors. For instance, PaMIR [79] and ICON [73] utilized skinned body models [42,51] to guide 3D reconstruction. ARCH [25], ARCH++ [21], and CAR [39] transformed global coordinates into the canonical coordinates to allow for reposing. PHOHRUM [4] and S3F [12] further disentangled shading and albedo to enable relighting. Another line of work replaced the neural representations with conventional Poisson surface reconstruction [33,34]. ECON [74] and 2K2K [18] trained normal and depth predictors to generate front and back 2.5D point clouds. The human mesh is obtained by fusing these point clouds with body priors and 3D heuristics. However, none of these methods produce realistic full-body texture and geometry in the unobserved regions. Our pipeline addresses this problem by incorporating a generative diffusion model into the 3D human reconstruction workflow. 3D generation with 2D diffusion models. Diffusion models [52,56,57,59] trained with large collections of images have demonstrated unprecedented capability in creating 3D objects from text prompts. Most prior work [11,19,40,47,53,68] followed an optimization workflow to update 3D representations (e.g. 
NeRF [48], SDF tetrahedron [63]) via neural rendering [66] and a score distillation sampling (SDS) [53] loss. While some methods [3,10,24,27,64,76] applied this workflow to human bodies, they cannot produce accurate human bodies and appearances due to the ambiguity of text-conditioning. More recent work [41,54] also tried to extend this workflow with more accurate imageconditioning. However, we show that they struggle to recover human clothing details and require a long optimization time. Most related to our work is Chupa [35], which also decomposes its pipeline into two stages. Note that Chupa is an optimization-based approach that relies on texts and cannot model colors. We address these issues by introducing an image-conditioning strategy and model. Most importantly, our method swiftly reconstructs full-texture human meshes without any optimization process. Diffusion models adaptation. Foundation models [8,13,20,36] trained on large-scale datasets have been shown to be adaptable to various downstream tasks. Following this trend, pretrained diffusion models [52,56,57,59] have become common backbones for generative modeling. For instance, they can be customized by finetuning with a small collection of images [23,37,58]. ControlNet [77] introduced additional trainable plugins to enable image conditioning such as body skeletons. While these strategies have been widely adopted, none of them directly fit our objective. More relevant to our task is DreamPose [31], which utilizes DensePose [16] images as conditions to repose input images. However, it cannot handle out-of-distribution images due to overfitting. Similarly, Zero-1-to-3 [41] finetunes a diffusion model with multi-view images to allow for viewpoint control. However, we show that viewpoint conditioning is not sufficient for generating consistent human bodies.\nOur model addresses this issue by providing accurate body pose and mask conditions for back-view hallucination." }, { "figure_ref": [ "fig_0" ], "heading": "Methodology", "publication_ref": [ "b50", "b74" ], "table_ref": [], "text": "Method overview. Given an input image of a human body and estimated SMPL-X [51] parameters, SiTH produces a full-body textured mesh. This mesh not only captures the observed appearances but also recovers geometric and textural details in unseen regions, such as clothing wrinkles on the back. The pipeline is composed of two modules and is summarized in Fig. 2. In the first stage, we hallucinate unobserved appearances leveraging the generative power of an image-conditioned diffusion model (Sec. 3.1). In the second stage, we reconstruct a full-body textured mesh given the input front-view image and the generated back-view image as guidance (Sec. 3.2). Notably, both modules are efficiently trained with 500 textured human scans in THuman2.0 [75]." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_0" ], "heading": "Back-view Hallucination", "publication_ref": [ "b56", "b54", "b12", "b56", "b36", "b35" ], "table_ref": [], "text": "Preliminaries. Given an input front-view image I F ∈ R H×W ×3 , our goal is to infer a back-view image I B ∈ R H×W ×3 which depicts unobserved body appearances. This task is under-constrained since there are multiple possible solutions to the same input images. Taking this perspective into account, we leverage a latent diffusion model (LDM) [57] to learn a conditional distribution of back-view images given a front-view image. 
First, a VAE autoencoder, consisting of an encoder E and a decoder D, is pretrained on a corpus of 2D natural images through image reconstruction, i.e. Ĩ = D(E(I)). Afterwards, an LDM learns to produce a latent code z within the VAE latent distribution z = E(I) from randomly sampled noise. To sample an image, a latent code z is obtained by iteratively denoising Gaussian noise. The final image is reconstructed through the decoder, i.e., Ĩ = D(z).\nImage-conditioned diffusion model. Simply applying the LDM architecture to our task is not sufficient since our goal is to learn a conditional distribution of back-view images given an input conditional image. To this end, we make several adaptations to allow for image-conditioning as shown in Fig. 3. First, we utilize the pretrained CLIP [55] image encoder and VAE encoder E to extract image features from the front-view image (i.e., I F ). These image features are used for conditioning the LDM, ensuring the output image shares a consistent appearance with the input image. Second, we follow the idea of ControlNet [77] and propose to use a UV map (I B U V ∈ R H×W ×3 ) and a silhou- ette mask (I B M ∈ R H×W ) from the back view as additional conditions. These conditional signals provide additional information that ensures the output image has a similar body shape and pose to the conditional input image.\nLearning hallucination from pretraining. Another challenge in training an image-conditioned LDM is data. Training the model from scratch is infeasible due to the requirement of a large number of paired images rendered from 3D textured human scans. Inspired by the concept of learning from large-scale pretraining [13,20], we build our image-conditioned LDM on top of a pretrained diffusion U-Net [57]. We utilize the finetuning strategy [37,77] to optimize cross-attention layers and ControlNet parameters while keeping most of the other parameters frozen (see Fig. 3). The design and training strategy of our imageconditioned diffusion model enables hallucinating plausible back-view images that are cosistent with the frontal inputs. Training and inference. To generate pairwise training images from 3D human scans, we sample camera view angles and use orthographic projection to render RGBA images from 3D scans and UV maps from their SMPL-X fits. Given a pair of images rendered by a frontal and its corresponding back camera, the first image serves as the conditional input I F while the other one is the ground-truth image I B . During training, the ground-truth latent code z 0 = E(I B ) is perturbed by the diffusion process in t time steps, resulting in a noisy latent z t . The image-conditoned LDM model ϵ θ aims to predict the added noise ϵ given the noisy latent z t , the time step t ∼ [0, 1000], the conditional image I F , the silhouette mask I B M , and the UV map I B U V (See Fig. 3). The objective function for fine-tuning can be represented as:\nmin θ E z∼E(I),t,ϵ∼N (0,I) ϵ -ϵ θ (z t , t, I F , I B U V , I B M )2 2 .\n(1) At test time, we obtain I B U V , I B M from an off-the-shelf pose predictor [9] and segmentation model [36]. To infer a backview image, we sample a latent z0 by performing the iterative denoising process starting from a Gaussian noise z T ∼ N (0, I). The back-view image can obtained by:\nĨB = D(z 0 ) = D(f θ (z T , I F , I B U V , I B M )),(2)\nwhere f θ is a function representing the iterative denoising process of our image-conditioned LDM (See Fig. 2 left)." 
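A minimal PyTorch-style sketch of this fine-tuning step is given below. The handles `vae`, `clip_image_encoder`, `unet`, `controlnet`, and `scheduler` stand in for a pretrained LDM stack, and their call signatures are assumptions made for illustration; only the way the conditions enter the objective of Eq. (1) follows the description above.

```python
import torch
import torch.nn.functional as F

def hallucination_training_step(vae, clip_image_encoder, unet, controlnet, scheduler,
                                front_img, back_img, uv_map, mask):
    """One noise-prediction step for Eq. (1); all module interfaces are assumed."""
    with torch.no_grad():
        z0 = vae.encode(back_img)                    # ground-truth back-view latent z_0 = E(I^B)
        cond_latent = vae.encode(front_img)          # VAE features of the conditional front view
        cond_tokens = clip_image_encoder(front_img)  # CLIP image features for cross-attention

    noise = torch.randn_like(z0)
    t = torch.randint(0, 1000, (z0.shape[0],), device=z0.device)
    z_t = scheduler.add_noise(z0, noise, t)          # forward diffusion to time step t

    # ControlNet consumes the back-view UV map and silhouette mask and returns
    # residual features that are injected into the (mostly frozen) U-Net.
    control = controlnet(z_t, t, cond_tokens, torch.cat([uv_map, mask], dim=1))

    # Image conditioning: the front-view latent is concatenated to the noisy latent,
    # while the CLIP features condition the cross-attention layers.
    eps_pred = unet(torch.cat([z_t, cond_latent], dim=1), t, cond_tokens, control)

    return F.mse_loss(eps_pred, noise)               # || eps - eps_theta(z_t, t, I^F, I^B_UV, I^B_M) ||_2^2
```

At inference, the same conditions steer the iterative denoising of Eq. (2), starting from Gaussian noise z_T.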
}, { "figure_ref": [ "fig_2", "fig_2", "fig_2", "fig_0" ], "heading": "Human Mesh Reconstruction", "publication_ref": [ "b59", "b60", "b59", "b60", "b21", "b78", "b50", "b6" ], "table_ref": [], "text": "After obtaining the back-view image, our goal is to construct a full-body human mesh and its textures using the input and back-view image as guidance. We follow the literature [60,61] to model this task with a data-driven method. Given pairwise training data (i.e., front/back-view images and 3D scans), we learn a data-driven model that maps these images to a 3D representation (e.g., a signed distance field (SDF)). We define this mapping as below:\nΦ : R H×W ×3 × R H×W ×3 × R 3 → R × R 3 (I F , I B , x) → d x , r x ,(3)\nwhere x is the 3D coordinate of a query point, and d x , r x denote the signed distance and RGB color value at point x.\nThe network components we used for learning the mapping function are depicted in Fig. 4. Local feature querying. To learn a generic mapping function that is robust to unseen images, it is important that the model is conditioned solely on local image information with respect to the position of x. Therefore, we employ the idea of pixel-aligned feature querying [60,61] and separate our model into two branches, i.e., color and geometry. Our model contains a normal predictor that converts the RGB image pair (I F , I B ) into normal maps (N F , N B ). Two image feature encoders G d , G r then extract color and geometry feature maps (f d , f r ) ∈ R H ′ ×W ′ ×D from the images and normal maps respectively (for simplicity we describe the process for a single image and leave out the superscripts, but both front and back images are treated the same). Finally, we project the query point x onto the image coordinate (Fig. 4 red points) to retrieve the local features\n(f d,x , f r,x ) ∈ R D : f d,x = B(f d , π(x)) = B(G d (N ), π(x)), f r,x = B(f r , π(x)) = B(G r (I), π(x)),(4)\nwhere B is a local feature querying operation using bilinear interpolation and π(•) denotes orthographic projection.\nLocal positional embedding with skinned body prior.\nAs mentioned in Sec. 1, a major difficulty in mesh reconstruction is 3D ambiguity where a model has to infer unknown depth information between the front and back images. To address this issue, we follow prior work [22,73,79] leveraging a skinned body mesh [51] for guiding the reconstruction task. This body mesh is regarded as an anchor that provides an approximate 3D shape of the human body.\nTo exploit this body prior, we devise a local positional embedding function that transforms the query point x into the local body mesh coordinate system. We look for the closest point x * c on the body mesh (Fig. 4 blue point), i.e.,\nx * c = arg min\nxc ∥x -x c ∥ 2 ,(5)\nwhere x c are points on the skinned body mesh M. Our positional embedding p constitutes four elements: a signed distance value d c between x * c and x, a vector n c = (x-x * c ), the UV coordinates u c ∈ [0, 1] 2 of the point x * c , and a visibility label v c ∈ {1, -1, 0} that indicates whether x * c is visible in the front/back image or neither. Finally, two separate MLPs H d , H r take the positional embedding p = [d c , n c , u c , v c ] and the local texture/geometry features (f d,x , f r,x ) as inputs to predict the final SDF and RGB values at point x:\nd x = H d (f F d,x , f B d,x , p), r x = H r (f F r,x , f B r,x , p).(6)\nTraining and inference. We used the same 3D dataset described in Sec. 
both branches with the following reconstruction losses:\nL d = ∥d -d x ∥ 1 + λ n (1 -n • ∇ x d x ),(7)\nL r = ∥r -r x ∥ 1 .(8)\nNote that ∇ x indicates numerical finite differences for computing local normals at point x and λ n is a hyperparameter.\nDuring inference, we use the input image I F and the back-view image ĨB obtained from Sec. 3.1 to reconstruct 3D mesh and textures. First, we align both images with the estimated body mesh M to ensure that image features can be properly queried around the 3D anchor. We adopt a similar strategy of SMPLify [7] to optimize the scale and the offset of the body mesh with silhouette and 2D joint errors. Finally, we perform the marching cube algorithm [43] by querying SDF and RGB values within a dense voxel grid via Eq. (3) (see Fig. 2 right)." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b0", "b74", "b44", "b21", "b74", "b44", "b21", "b45", "b77" ], "table_ref": [], "text": "Dataset. Previous work relied on training data from commercial datasets such as RenderPeople [1]. While these datasets offer high-quality textured meshes, they also limit reproducibility due to limited accessibility. For fair comparisons, we follow ICON [73] by training our method on the public 3D dataset THuman2.0 [75] and using the CAPE [45] dataset for evaluation. However, we observed potential biases in the evaluation due to the low-res groundtruth meshes and image rendering defects in the CAPE dataset (for a detailed discussion, please refer to Supp-Sec. 6). Consequently, we further create a new benchmark that evaluates the baselines on a higher-quality 3D human dataset CustomHumans [22]. In the following, we provide a summary of the datasets used in our experiments: • THuman2.0 [75] contains approximately 500 scans of humans wearing 150 different garments in various poses.\nWe use these 3D scans as the training data.\n• CAPE [45] contains 15 subjects in 8 types of tight outfits.\nThe test set, provided by ICON, consists of 100 meshes. We use CAPE for the quantitative evaluation (Sec. 4.2). • CustomHumans [22] contains 600 higher-quality scans of 80 subjects in 120 different garments and varied poses. We selected 60 subjects for all quantitative experiments, user studies, and ablation studies. (Sec. 4.2 -Sec. 4.4) Evaluation protocol. We follow the evaluation protocol in OccNet [46] and ICON [73] to compute 3D metrics Chamfer distance (CD), normal consistency (NC), and f-Score [65] on the generated meshes. To evaluate reconstructed mesh texture, we report LPIPS [78] of front and back texture rendering. In user studies, 30 participants rank the meshes obtained by four different methods. We report the average ranking ranging from 1 (best) to 4 (worst)." }, { "figure_ref": [ "fig_3" ], "heading": "Single-view Human Reconstruction", "publication_ref": [ "b59", "b60", "b78", "b14", "b3", "b17", "b73", "b52", "b40", "b48", "b53", "b62", "b34", "b37" ], "table_ref": [], "text": "Benchmark evaluation. We compared SiTH with stateof-the-art single-view human reconstruction methods, including PIFu [60], PIFuHD [61], PaMIR [79], FOF [15], ICON [73], PHORHUM [4], 2K2K [18], and ECON [74] on CAPE and CustomHumans. Note that PHORHUM is only used for qualitative comparison since a different camera system is used, leading to the misalignment with ground-truth meshes. We visualize the generated mesh texture and normals in Fig. 5. 
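For readers interested in how the pixel-aligned querying of Eq. (4) is realized in practice, a minimal self-contained sketch is shown below. The orthographic normalization (image plane spanning [-1, 1] in x and y) and the tensor shapes are assumptions made for illustration, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def query_pixel_aligned_features(feature_map: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
    """B(f, pi(x)) from Eq. (4): bilinear lookup of local features at projected points.

    feature_map: (B, D, H', W') encoder output f.
    points:      (B, N, 3) query points x, assumed to live in the cube [-1, 1]^3.
    returns:     (B, N, D) local features f_x.
    """
    grid = points[..., :2].unsqueeze(2)          # orthographic projection pi(x): keep (x, y); shape (B, N, 1, 2)
    sampled = F.grid_sample(feature_map, grid,   # bilinear interpolation B(., .)
                            mode="bilinear", align_corners=True)
    return sampled.squeeze(-1).permute(0, 2, 1)  # (B, N, D)

# Toy usage with random tensors (shapes only; no trained encoders involved).
f_front = torch.randn(1, 32, 128, 128)           # e.g. a 32-dimensional feature map from G_d or G_r
x = torch.rand(1, 4096, 3) * 2 - 1               # query points sampled in [-1, 1]^3
features = query_pixel_aligned_features(f_front, x)   # (1, 4096, 32)
```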
Existing methods produce oversmoothed texture and normals, particularly in the back. Our method not only generates photorealistic and perceptually consistent appearances in unobserved regions but also recovers underlying geometric details like clothing wrinkles.\nThe quantitative results are summarized in Tab. 1. It's worth noting that most methods are trained with commercial datasets ( gray color in Tab. 1), while the others are trained on the public THuman2.0 dataset. To evaluate the methods leveraging a skinned body prior (i.e., PaMIR, ICON, ECON, FOF, and SiTH), we use the same pose alignment procedure in their original implementations for a fair comparison. Results in Tab. 1 show that the method using a body prior (PaMIR) outperformed the end-to-end method (PIFuHD) on tight clothing and challenging poses in CAPE. However, it falls short in handling diverse outfits in Cus-tomHumans. Moreover, the methods trained on commercial datasets achieve better performance than those trained with public data (ICON, ECON). Notably, our method is robust across both benchmarks, achieving performance comparable to the methods trained on high-quality commercial data.\nCompared with optimization-based methods. We compared SiTH with methods that use pretrained diffusion models and a score distillation sampling loss [53] to optimize 3D meshes. In the case of Zero-1-to-3 [41], we used the input image to optimize an instant NGP [49] radiance field, and for Magic-123 [54], we provided additional text prompts to optimize an SDF tetrahedron [63]. From Fig. 7, we see that while both methods can handle full-body textures, they struggle with reasoning the underlying geometry and clothing details. It is worth noting that Zero-1-to-3 and Magic-123 require 10 minutes and 6 hours in optimization, respectively, while our method takes under 2 minutes to generate a textured mesh with a marching cube of 512 3 resolution. Table 2. User study results. Top: 30 users are asked to rank the quality of surface normal images from best (1) to worst (4). We report the average ranking of each method. Middle: Similar to the first task, users are asked to rank the quality of RGB textures.\nBottom: We ask users to choose the mesh with a better quality.\nAdditionally, more similar to our method is Chupa [35], which generates front/back-view normals for mesh reconstruction. Note that Chupa is not conditioned on images and does not generate texture. Instead, we provided body poses and text prompts generated by an image-to-text interrogator [38] as their conditional inputs. From Fig. 7, it's clear that text-conditioning is less accurate than imageconditioning, and the method struggles to generate unseen clothing styles such as coats. By contrast, our method can reconstruct correct clothing geometry and texture from unseen images. We present more discussions and comparisons with optimization-based methods in Supp-Sec. 9.1.\nUser study. The above metrics may not fully capture the quality of 3D meshes in terms of realism and local details.\nTo address this, we conducted a user study to compare the texture and geometry quality among various baselines. We invited 30 users to rank the front/back-view texture and normal renderings of 3D meshes generated by four different methods. Additionally, we asked the users to assess the similarity between the input images and the generated meshes. The results (Tab. which leverages the generative capability of diffusion models, consistently outperforms each baseline. 
It also produces more preferred front-view textures and geometries, as evidenced by higher user rankings. We also conducted a user study with Chupa (in Tab. 2 bottom) which also indicates more users prefer the 3D meshes generated by our method." }, { "figure_ref": [], "heading": "Generative Capability", "publication_ref": [ "b30", "b40", "b37" ], "table_ref": [], "text": "Image quality comparison. Our hallucination module is a unique and essential component that generates spatially aligned human images to guide 3D mesh reconstruction.\nGiven that our focus is on back-view hallucination, we compare the quality of generated images with the relevant generative methods in Fig. 6. We trained a baseline Pix2PixHD [69] model, which produced smooth and blurry results on unseen images due to overfitting to 500 subjects. Another method closely related to ours is DreamPose [31], which conditions the model with DensePose images and finetunes the diffusion model with paired data. However, their model failed to handle unseen images, in contrast to our approach. While Zero-1-to-3 [41] can generalize to unseen images, their method faces challenges in generating consistent body poses given the same back-view camera. Moreover, we designed another baseline that provides ControlNet [77] for corresponding text prompts using an image-to-text interrogator [38]. However, without proper image conditioning and fine-tuning, such a method cannot generate images that faithfully match the input appearances.\nOur method not only addresses these issues but also handles stochastic appearances (e.g., tiny differences in wrinkles) from different random seeds. We report 2D generative evaluation metrics and more results in Supp-Sec. 8.1." }, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "We conducted controlled experiments to validate the effectiveness of our proposed modules. As shown in Fig. 8, the skinned body mesh is a crucial component for 3D human reconstruction. Without this body mesh as a 3D anchor, the output mesh contains an incorrect body shape due to the depth ambiguity issue. Conversely, removing the hallucination module has minimal impact on 3D reconstruction metrics, though it slightly degrades normal consistency. However, the overall quality in both texture and geometry is incomparable with our full model (see Fig. 8 right). This is consistent with our findings in user studies, indicating that 3D metrics may not accurately reflect the perceptual quality of 3D meshes. Finally, we tested two additional variants, leveraging ground-truth body meshes and real back-view images in our full pipeline, representing the upper bound of our method. As shown in Tab. 3 bottom, this additional information notably improves the 3D metrics. These results highlight the persistent challenges in the single-view reconstruction problem, including pose ambiguity and the stochastic nature of clothing geometry. For more experiments on our design choices, please refer to Supp-Sec. 8.5." }, { "figure_ref": [], "heading": "Applications", "publication_ref": [], "table_ref": [], "text": "Inheriting the generative capability of LDM, SiTH is robust to diverse inputs, such as out-of-distribution or AIgenerated images. We demonstrate a unique solution to link photo-realistic AI photos and high-fidelity 3D humans. In Fig. 9, we introduce a 3D creation workflow integrating powerful text-to-image generative models. 
Given a body pose, we generate a front-view image using Stable Diffusion and ControlNet using text prompts. SiTH then creates a full-body textured human from the AI-generated image." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose an innovative pipeline designed to create fully textured 3D humans from single-view images. Our approach seamlessly integrates an image-conditioned diffusion model into the existing data-driven 3D reconstruction workflow. Leveraging the generative capabilities of the diffusion model, our method efficiently produces lifelike 3D humans from a diverse range of unseen images in under two minutes. We expect our work will advance the application of generative AI in 3D human creation. " }, { "figure_ref": [], "heading": "SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion", "publication_ref": [], "table_ref": [], "text": "Supplementary Material" }, { "figure_ref": [ "fig_9", "fig_8" ], "heading": "Benchmark Description", "publication_ref": [ "b44", "b21", "b21" ], "table_ref": [], "text": "We provide detailed descriptions of the CAPE [45] and the CustomHumans [22] dataset used for benchmark evaluation in our study. The CAPE dataset includes sequences of posed humans featuring 15 subjects. For evaluation purposes, ICON [73] selected 100 frames, each consisting of RGBA images from three viewpoints and an SMPL+D (vertex displacements) ground-truth mesh. We identified several limitations in the CAPE dataset (refer to Fig. 11) Firstly, there is limited diversity in human outfits, as most subjects wear tight clothing such as t-shirts and shorts. Secondly, the images are rendered from unprocessed point clouds, leading to rendering defects. Lastly, the groundtruth meshes are of low resolution and do not fully correspond to the input images. These issues suggest that experiments conducted solely on the CAPE dataset may be biased.\nTo ensure an unbiased evaluation, we introduced a new benchmark using the higher-quality, publicly available 3D human dataset, CustomHumans [22]. Specifically, we selected 60 textured human scans, each featuring different outfits, for evaluation. For each scan, we rendered test images from four different viewpoints. Note that we directly rasterize the textured scans to obtain the input images, ensuring that the ground-truth mesh precisely corresponds to the images. Fig. 10 showcases samples from our benchmarks, highlighting the increased diversity of the clothing. " }, { "figure_ref": [], "heading": "Implementation Detail 7.1. Back-view Hallucination Module", "publication_ref": [ "b1", "b40", "b74" ], "table_ref": [], "text": "We detail the implementation of our image-conditioned diffusion model described in Sec. 3.1. Our model backbone is based on the Stable Diffusion image variations [2] which leverages CLIP features for cross-attention and VAE features for concatenation in image conditioning. In both training and inference, the pretrained VAE autoencoder and the CLIP image encoder are kept frozen. We initialize the diffusion U-Net's weights using the Zero-1-to-3 [41] model and create a trainable ControlNet [77] model following the default network setups but with an adjustment to the input channels. The ControlNet inputs contain 4 channels of masks and UV images with an optional 4 channels of camera view angles. The camera view angles are essential only when generating images from arbitrary viewpoints (instead of only back-view). 
The ControlNet model and the diffusion U-Net's cross-attention layers are jointly trained with 512 × 512 resolution multi-view images, rendered from the THuman2.0 dataset [75].\nFor each scan in THuman2.0, we render front-back image pairs from 20 camera angles, resulting in around 10k training pairs. We also randomly change the background colors for data augmentation. For training, we utilize a batch size of 16 images and set the learning rate to 4 × 10 -6 incorporating a constant warmup scheduling. The Control-Net model's conditioning scale is fixed at 1.0. We employ classifier-free guidance in our training, which involves a dropout rate of 0.05 for the image-conditioning. The training takes about two days on one NVIDIA A100 GPU for 10k steps. During inference, we apply a classifier-free guidance scale of 2.5 to obtain the final output images." }, { "figure_ref": [], "heading": "Mesh Reconstruction Module", "publication_ref": [ "b60", "b29" ], "table_ref": [], "text": "We follow the methodology of PIFuHD [61], using the HourGlass [50] and the fully convolutional [30] model as our image feature extractors and the normal predictor, respectively. The feature extractors yield a 32-dimensional feature map for feature querying. Our geometry MLP is designed with five layers of 512-dimensional linear layers, each followed by a leakyReLU activation function. Skip connections are applied at the third, fourth, and fifth layers. On the other hand, the texture MLP comprises four layers of 256-dimensional linear layers, with skip connections at the third and fourth layers.\nWe first train the normal predictor using normal images rendered from the THuman2.0 dataset. We optimize the normal predictor with an L1 reconstruction loss for 600 epochs. Subsequently, we proceed to jointly train the feature extractor and the SDF MLPs with a learning rate of 0.001 and a batch size of 2 scans. The normal predictor is jointly fine-tuned with a learning rate 1 × 10 -5 . We set the hyperparameter λ n to 0.1. During each training iteration, we sample 40,960 query points within a thin shell surrounding the ground-truth mesh surfaces. The entire training process requires approximately five days on a single NVIDIA A100 GPU for 800 epochs on the THuman2.0 dataset. Finally, we train the other feature extractor and the RGB MLPs with a learning rate of 0.001 and a batch size of 2 scans for 200 epochs. During inference, a 3D textured mesh can be reconstructed under two minutes with an NVIDIA 3090 GPU. This includes pose estimation and mask prediction (3s), generation of a back-view image (4.5s), alignment of the body mesh and the input images (10s), and mesh reconstruction at the marching cube resolution of 512 3 (60s)." }, { "figure_ref": [], "heading": "More Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Image Quality Comparison", "publication_ref": [ "b68", "b30", "b40", "b69", "b77", "b5", "b43" ], "table_ref": [], "text": "We carry out a quantitative evaluation on the images generated by Pix2PixHD [69], DreamPose [31], Zero-1-to-3 [41], ControlNet [77], using ground-truth back-view images for comparison (see Tab. 4). To assess the image quality, we employ various metrics, including multi-scale Structure Similarity (SSIM) [70], Learned Perceptual Image Patch Similarity (LPIPS) [78], Kernel Inception Distance (KID) [6], and 2D joint errors using a pose predictor [44]. 
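Of these metrics, SSIM and the joint error can be computed without any learned components; the sketch below uses plain single-scale SSIM from scikit-image as a stand-in for the multi-scale variant, while LPIPS and KID require pretrained networks (e.g. the `lpips` package) and are omitted. This is an illustrative reference, not the evaluation script.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_rgb(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Single-scale SSIM between two HxWx3 uint8 images."""
    return structural_similarity(img_a, img_b, channel_axis=-1, data_range=255)

def mean_joint_error(pred_joints: np.ndarray, gt_joints: np.ndarray) -> float:
    """Mean 2D joint error in pixels between (J, 2) keypoint arrays."""
    return float(np.linalg.norm(pred_joints - gt_joints, axis=-1).mean())

# Toy usage with synthetic data.
a = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
b = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
print(ssim_rgb(a, b), mean_joint_error(np.zeros((17, 2)), np.ones((17, 2))))
```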
Our method demonstrates better performance over the others in terms of similarity, quality, and pose accuracy. In Fig. 12, we present additional results generated by these methods. DreamPose exhibits overfitting issues, failing to accurately generate back-view images with the correct appearances. Although ControlNet successfully predicts images with correct poses, it shows less accuracy in text conditioning, particularly in generating inconsistent appearances. Zero-1-to-3, shows instability in view-point conditioning, resulting in a noticeable variance in the human body poses in the generated images. In contrast, our method" }, { "figure_ref": [], "heading": "DreamPose ControlNet + Interrogate", "publication_ref": [], "table_ref": [], "text": "Zero-1-to-3 SiTH (Ours) Input Image GT References\nFigure 12\n. Qualitative comparison of back-view hallucination. We visualize back-view images generated by the baseline methods. Note that the three different images are sampled from different random seeds. Our results are perceptually close to the ground-truth image in terms of appearances and poses. Moreover, our method also preserves generative stochasticity for handling hairstyles and clothing colors.\nnot only produces more faithful back-view images but also handles stochastic elements such as hairstyles and clothing colors." }, { "figure_ref": [ "fig_10", "fig_10" ], "heading": "3D Reconstruction Plugin", "publication_ref": [], "table_ref": [], "text": "We demonstrate that our hallucination module can be seamlessly integrated into existing single-view clothed human reconstruction pipelines. We implemented variants of ICON, ECON, and PIFuHD by providing them back normal from our generated back-view images (denoted as +BH).\nThese are then compared to the original methods and their respective variants using the Zero-1-to-3 model as a plugin (denoted as -123). As shown in Fig. 13, integrating Zero-1-to-3 with these methods did not produce satisfactory clothing geometry. In contrast, our hallucination module yielded more realistic clothing wrinkles and enhanced the perceptual quality of ICON, ECON, and PIFuHD. Note that even though we provide additional images with these baselines, our pipeline still produced more detailed geometry and correct body shapes. This again verifies the importance and effectiveness of our mesh reconstruction module.\nThe quantitative results, presented in Table Tab. 5, further support these findings. We observed that the combination of Zero-1-to-3 with these methods did not lead to significant improvements. However, our hallucination module slightly enhanced the 3D metrics for ICON and ECON but had a marginally negative impact on PIFuHD. The reason can be observed from Fig. 13 where PIFuHD tends to produce smooth surfaces that result in better numeric performance. ICON and ECON benefit from our hallucinations since their original model produced artifacts and incorrect clothing details. This finding also confirms the necessity of our user studies in Sec. 4.2 since the visual quality is hard to measure by the existing metrics. 5. Generative plugins for 3D reconstruction. We extend the baseline methods with Zero-1-to-3 (denoted as -123) and our hallucination module (denoted as +BH). Our method improves their perceptual qualities without affecting their overall performance. Red and blue indicate improvements and decreases respectively." 
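The point-based 3D metrics reported in these tables (bi-directional Chamfer distance and f-score) can be computed from surface point samples as sketched below; the exact sampling and normalization choices of the benchmark scripts may differ, so this is an illustrative reference only. Distances are assumed to be in centimeters, matching the 1 cm f-score threshold used in the evaluation.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_fscore(pred_pts: np.ndarray, gt_pts: np.ndarray, tau: float = 1.0):
    """Returns (P-to-S distance, S-to-P distance, f-score at threshold tau) from (N, 3) point sets."""
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)   # nearest scan point for each predicted point
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)   # nearest predicted point for each scan point

    p2s = d_pred_to_gt.mean()                           # prediction-to-scan Chamfer term
    s2p = d_gt_to_pred.mean()                           # scan-to-prediction Chamfer term

    precision = (d_pred_to_gt < tau).mean()             # fraction of predicted points within tau of the scan
    recall = (d_gt_to_pred < tau).mean()                # fraction of scan points within tau of the prediction
    fscore = 2 * precision * recall / max(precision + recall, 1e-8)
    return p2s, s2p, fscore

# Toy usage with random point clouds standing in for points sampled from the meshes.
pred = np.random.rand(10000, 3) * 100.0
gt = np.random.rand(10000, 3) * 100.0
print(chamfer_and_fscore(pred, gt, tau=1.0))
```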
}, { "figure_ref": [], "heading": "More Benchmark Evaluation", "publication_ref": [ "b4", "b31", "b3", "b17" ], "table_ref": [], "text": "We present detailed descriptions of our benchmark evaluation protocol. For a fair comparison, we generated meshes from all baselines using marching cubes with a resolution of 256. To accurately compare the reconstructed meshes with ground-truth meshes, we utilize the Iterative Closest Point (ICP) algorithm [5] to register reconstructed meshes. This step is crucial for aligning the meshes with ground truth, thereby eliminating issues of scale and depth misalignment of different methods. When calculating the metrics, we sampled 100K points per mesh, and the threshold for computing the f-scores is set to 1cm. To evaluate texture reconstruction, we render front and back-view images of the generated textured meshes using aitviewer [32]. During our evaluations, we noticed that some baselines, specifically PHORHUM [4] and 2K2K [18], cannot handle nonfront-facing images. Therefore, in the manuscript (Tab. 1) all the results used front-facing images. To provide a more comprehensive comparison and an evaluation aligned with real-world use cases, we include results based on images rendered from multiple view angles in Tab. 6. The CAPE and CustomHumans datasets contain images from three and four view angles respectively. Despite marginal degradation, the results indicate that our method consistently outperforms other methods in single-view 3D reconstructing." }, { "figure_ref": [], "heading": "Robustness to View Angles", "publication_ref": [ "b44", "b21" ], "table_ref": [], "text": "Inspired by insights from the previous subsection, we are interested in assessing the robustness of our method against variations in image view angles. To this end, we rendered images by rotating the texture scans by {0, 15, 30, 45, 60, 75, 90} degrees and subsequently computed their perspective 3D reconstruction metrics. This analysis is detailed in Tab. 7. We found that our pipeline CAPE [45] CustomHumans [22] Method 7. Robustness of 3D reconstruction with respect to view angles. We tested our pipeline using the images and textured scans that were rotated by varying view angles. Note that we use GT back-view images and only analyze the robustness of the mesh reconstruction module. The results from these tests demonstrate that our method maintains robustness within a view angle change of up to 45 degrees.\nmaintains robustness with viewpoint perturbations up to 45 degrees. However, a significant increase in the Chamfer distance was observed when the angle increased from 45 to 60 degrees. This difference could stem from potential failures in pose estimation or the underlying assumption that human bodies can be reconstructed from only front and back-view images, which may not hold true at wider angles. These observations provide a strong motivation for future research focused on enhancing the robustness of image reconstruction across varying view angles" }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Verification of Design Choices", "publication_ref": [], "table_ref": [], "text": "Image conditioning strategies. We analyzed different strategies to incorporate image-conditioning in the diffusion U-Net. Fig. 14 depicts the effects of using the CLIP image encoder and the VAE image encoder. The results show that simply relying on the CLIP image encoder is not sufficient to provide accurate image conditioning. 
The clothing appearances cannot be accurately represented in the shared latent space of texts and images. On the other hand, the VAE encoder alone might also lose semantic information, such as male and female, for back-view hallucination. The hairstyles in the back are not consistent with the front-view image. Finally, the combination of both im-age features (CLIP+VAE) complements missing information of each image feature, therefore achieving more plausible results for back-view hallucination.\nControlNet inputs. We conducted controlled experiments to validate the efficacy of using SMPL-X UV maps and silhouette masks as conditioning inputs for our diffusion model. Fig. 15 illustrates the impact of employing different input images on the ControlNet models. Our results show that omitting the silhouette masks (w/o Mask) results in output images that lack consistent body shapes with the input images, especially in areas with garments like skirts. Conversely, while relying solely on silhouette masks (w/o UV Map) ensures shape consistency, the model struggles to differentiate between front and back views. This is particularly evident in the incorrect appearances on the head and face. Notably, the integration of both the silhouette and SMPL-X UV maps leads to more stable and accurate backview hallucinations, thereby validating our approach.\nParameters finetuning. We conducted an analysis of the training strategy for our image-conditioned diffusion model by designing and comparing several training strategies." }, { "figure_ref": [], "heading": "From Scratch", "publication_ref": [], "table_ref": [], "text": "CtrlNet Only CtrlNet+U-Net Full CtrlNet+CrossAtt. Input lustrates the tangible benefits of incorporating normal guidance. Without normal guidance, the mesh surface becomes noticeably smoother, and the model struggles to accurately reconstruct challenging clothing, such as coats. This observation aligns with our findings in Sec. 4.4 and Sec. 8.2, indicating that conventional 3D metrics may not fully capture perceptual quality. Hence, this trade-off highlights the importance of the normal predictor and guidance in achieving high-fidelity 3D human reconstruction." }, { "figure_ref": [], "heading": "Input Ours TeCH", "publication_ref": [], "table_ref": [], "text": "Texture-Geometry Aligned Texture-Geometry Not Aligned Compared to the optimization-based method (TeCH), our method reconstructs consistent facial details and well-aligned mesh texture and geometry. Note that TeCH requires 6 hours to optimize both texture and geometry." }, { "figure_ref": [ "fig_0", "fig_0", "fig_15", "fig_16" ], "heading": "Additional Results", "publication_ref": [], "table_ref": [], "text": "We present more qualitative results in Fig. 20, Fig. 21, Fig. 22, and Fig. 23, demonstrating our method's robustness in handling unseen images sourced from the Internet." }, { "figure_ref": [ "fig_12" ], "heading": "Discussion", "publication_ref": [ "b23", "b2" ], "table_ref": [], "text": "9.1. Data-driven v.s. Optimization\nNumerous concurrent works, such as TECH [24] and Human-SGD [3], propose creating 3D textured humans from single images using optimization-based approaches. These methods primarily build upon pretrained diffusion models and a Score Distillation Sampling loss, with several adaptations. In our discussion, we highlight the unique aspects of our method in comparison. Our method uniquely integrates a diffusion model into the existing data-driven 3D reconstruction workflow. 
This integration allows us to efficiently exploit 3D supervision to learn a generalized model for single-view reconstruction, thus avoiding the need for costly and time-consuming per-subject optimization. Consequently, our pipeline can generate high-quality textured meshes in under two minutes. Moreover, we observed that the existing optimization-based methods failed to generate 3D meshes having consistent and aligned texture and geometry (Fig. 18). This is due to their requirements of optimizing texture and geometry with separate optimization processes. Instead, our results are more similar to the input images and retain the consistency of both texture and geometry. Lastly, our two-stage pipeline empowers the 3D human creation process with controllability. As demonstrated in Sec. 8.1, our hallucination model handles generative stochasticity and is able to create various plausible back-view images. This feature provides users with the flexibility to choose back-view appearances based on their preferences, instead of solely relying on a random optimization process. However, as previously discussed, our method does have certain limitations. We believe that the further cross-pollination of both methods offers a promising path for future developments in generative 3D human creation." }, { "figure_ref": [ "fig_14", "fig_14", "fig_14" ], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Complex clothing textures. We observed a challenge with the image-conditioned diffusion model in accurately generating complex clothing textures, such as stripes or plaid (Fig. 19 Top). This limitation stems from the image feature resolution using a pretrained VAE image encoder for feature extraction and reconstruction. The model generates output images at a resolution of 512 × 512, yet the diffusion U-Net is limited to processing features of only 64 × 64. Consequently, finer texture details may be lost in the diffusion process. This issue motivates the need for future development of pixel-perfect image-conditioning approaches, which could more accurately capture details in high-resolution images.\nSide-view appearances. Our method follows the established practice in single-view human reconstruction, using a \"sandwich-like\" approach that relies on front and back information. This technique reduces the need for extensive multi-view images for 3D reconstruction. However, as shown in Fig. 19 Middle, a limitation of this method is the loss of detail in side views. A promising direction for future enhancements would be integrating our pipeline with optimization-based methods for a more detailed 3D human creation. Our pipeline currently provides a robust initialization by providing 3D human models with geometric and appearance details. By leveraging this initialization, the lengthy optimization process could be accelerated, making it more effective for creating detailed 3D humans.\nSelf-occlusion. Our mesh reconstruction module struggles to reconstruct appearance details in self-occluded regions, as illustrated in Fig. 19 Bottom. This challenge arises because essential information in these areas is not captured by either front or back-view images, and thus the mesh reconstruction module fails to infer these details. One potential solution is using an optimization process for refinement, as previously suggested. 
Another promising direction for future work could be developing a hallucination model capable of generating multi-view images with accurate 3D consistency, which would help reduce the self-occluded regions in mesh reconstruction. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements. This work was partially supported by the Swiss SERI Consolidation Grant \"AI-PERCEIVE\". We thank Xu Chen for insightful discussions, Manuel Kaufmann for suggestions on writing and the title, and Christoph Gebhardt, Marcel Buehler, and Juan-Ting Lin for their writing advice." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "The results of these training strategies are depicted in Fig. 16, which shows that training a large diffusion model from scratch using only 500 3D scans is impractical. While leveraging large-scale pretraining can mitigate this issue, the CtrlNet Only training strategy fails to generate consistent appearances from front-view images. Alternatively, when we unfroze the parameters in the diffusion U-Net, the model showed improvement in generating images more aligned with the input conditional image. However, this approach led to a limitation where the model consistently produced identical output images, thus compromising its gen- " } ]
Figure 1. Single-view textured human reconstruction. SiTH is a novel pipeline for creating high-quality and fully textured 3D human meshes from single images. We first hallucinate back-view appearances through an image-conditioned diffusion model, followed by the reconstruction of full-body textured meshes using both the front and back-view images. Our pipeline enables the creation of lifelike and diverse 3D humans from unseen photos (left) and AI-generated images (right, generated from the prompt "A woman in a white T-shirt and a blue tennis skirt").
SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion
[ { "figure_caption": "Figure 2 .2Figure 2. Method overview.SiTH is a two-stage pipeline composed of back-view hallucination and mesh reconstruction. The back-view hallucination module samples perceptually consistent back-view images through an iterative denoising process conditioned on the input image, UV map, and silhouette mask (Sec. 3.1). Based on the input and generated back-view images, the mesh reconstruction module recovers a full-body mesh and textures leveraging a skinned body prior as guidance (Sec. 3.2). Note that both modules in the pipeline can be trained with the same public 3D human dataset and generalize unseen images.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Training of back-view hallucination module. We employ a pretrained LDM and ControlNet architecture to enable image conditioning. To train our model, we render training pairs of conditional images I F and ground-truth images I B from 3D human scans. Given a noisy image latent zt, the model predicts added noise ϵ given the conditional image I F , UV map I B U V , and mask I B M as conditions. We train the ControlNet model and crossattention layers while keeping other parameters frozen.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Mesh reconstruction module. Given front and backview images (I F , I B ) we predict their normal images (N F , N B ) through a learned normal predictor. A 3D point x is projected onto these images for querying pixel-aligned features (f d,x , fr,x). To leverage human body mesh as guidance, we embed the point x into the local UV coordinates uc, vector nc, distance dc, and visibility vc. Finally, two decoders (H d , Hr) predict SDF and RGB values at x given the positional embedding and pixel-aligned features.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Qualitative comparison on CustomHumans. Top: Results of methods generating mesh and texture. Bottom: Results of methods generating mesh only. Note that single-view reconstruction is not possible to replicate exact back-view texture and geometry. Our method generates realistic texture and clothing wrinkles perceptually close to the real scans while other baselines only produce smooth colors and surfaces in the back regions. Best viewed in color and zoom in.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .Figure 7 .67Figure 6. Qualitative comparison of back-view hallucination. We visualize back-view images generated by the baseline methods. Note that the three different images are sampled from different random seeds. Our results are perceptually close to the ground-truth image in terms of appearances and poses. Moreover, our method also preserves generative stochasticity for handling tiny wrinkle changes.", "figure_data": "", "figure_id": "fig_4", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Ablation study. We visualize back and side-view rendering of the reconstructed meshes. 
Our full model produced a correct body shape and more realistic clothing geometry.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Contents", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Back-view Hallucination Module . . . . . . 13 7.2. Mesh Reconstruction Module . . . . . . . . 14 8. More Experimental Results 14 8.1. Image Quality Comparison . . . . . . . . . . 14 8.2. 3D Reconstruction Plugin . . . . . . . . . . 15 8.3. More Benchmark Evaluation . . . . . . . . . 16 8.4. Robustness to View Angles . . . . . . . . . 16 8.5. Verification of Design Choices . . . . . . . . 17 8.6. Additional Results . . . . . . . . . . . . . . 19 9. Discussion 19 9.1. Data-driven v.s. Optimization . . . . . . . . 19 9.2. Limitations . . . . . . . . . . . . . . . . . . 20", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. Examples of images and ground-truth scans in Cus-tomHumans. Our new benchmark contains diverse and challenging human scans for evaluation.", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. Defects in the CAPE dataset. The rendering defects from incomplete point clouds result in a notable discrepancy between the input images and the ground-truth meshes in the CAPE dataset.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 13 .13Figure 13. Reconstruction plugin. We replaced the back normal images typically used in existing 3D reconstruction methods with our generated back-view images. This modification enhances the perceptual qualities of these baseline methods.", "figure_data": "", "figure_id": "fig_10", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 16 .Figure 17 .1617Figure 16. Analysis of different network training strategies. We visualize the images generated by employing different network training strategies. We show that our method produces images with consistent appearances and is able to generate diverse hairstyles and clothing details. Note that the four different images are sampled from different random seeds. Best viewed in color and zoom in.", "figure_data": "", "figure_id": "fig_11", "figure_label": "1617", "figure_type": "figure" }, { "figure_caption": "Figure 18 .18Figure 18. Comparison with SDS optimization-based method.Compared to the optimization-based method (TeCH), our method reconstructs consistent facial details and well-aligned mesh texture and geometry. Note that TeCH requires 6 hours to optimize both texture and geometry.", "figure_data": "", "figure_id": "fig_12", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure 19 .19Figure 19. Limitations. Top: The hallucination model struggles with complex textures like stripes or plaid. Middle: Side-view appearances are not accurately recovered by mesh reconstruction. Bottom: The mesh reconstruction model is unable to effectively handle self-occluded regions.", "figure_data": "", "figure_id": "fig_14", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Figure 22 .22Figure 22. Examples of reconstruction from Internet images. Our method generates realistic clothing wrinkles in the back regions. 
Best viewed in color and zoom in.", "figure_data": "", "figure_id": "fig_15", "figure_label": "22", "figure_type": "figure" }, { "figure_caption": "Figure 23 .23Figure 23. Examples of reconstruction from Internet images. Our method generates realistic clothing wrinkles in the back regions. Best viewed in color and zoom in.", "figure_data": "", "figure_id": "fig_16", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "3.1 to render training image pairs (I F , I B ) from the 3D textured scans. For each training scan, query points x are sampled within a 3-dimensional cube [-1, 1] 3 . Single-view human reconstruction benchmarks. We report Chamfer distance (CD), normal consistency (NC), and f-score between ground truth and predicted meshes. To evaluate texture reconstruction quality, we compute LPIPS between the image rendering of GT and generated textures. The best and the second best methods are highlighted in bold and underlined respectively. Note that gray color denotes models trained on more commercial 3D human scans while the others are trained on the public THuman2.0 dataset.", "figure_data": "CAPE [45]CustomHuman [22]MethodCD: P-to-S / S-to-P (cm)↓NC↑f-Score↑LPIPS: F (×10 -2 ) ↓CD: P-to-S / S-to-P (cm)↓NC↑f-Score↑LPIPS: F / B (×10 -2 ) ↓PIFu [60]2.368 / 3.763 0.77833.8422.7202.209 / 2.582 0.80534.8816.073 / 8.496PIFuHD [61]2.401 / 3.522 0.77235.706-2.107 / 2.228 0.80439.076-PaMIR [79]2.190 / 2.806 0.80436.7252.0852.181 / 2.507 0.81335.8474.646 / 7.1522K2K [18]2.478 / 3.683 0.78228.700-2.488 / 3.292 0.79630.186-FOF [15]2.196 / 4.040 0.77734.227-2.079 / 2.644 0.80836.013-ICON [73]2.516 / 3.079 0.78629.630-2.256 / 2.795 0.79130.437-ECON [74]2.475 / 2.970 0.78830.488-2.483 / 2.680 0.79730.894-SiTH (Ours)1.899 / 2.261 0.81637.7631.9771.871 / 2.045 0.82637.0293.929 / 6.803", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation study on CustomHumans. We ablate the hallucination module and the skinned body mesh in our pipeline. Please refer to our discussion in Sec. 4.4.", "figure_data": "MethodCD (cm)↓NC↑f-Score↑W/o Body Mesh2.4710.80133.244W/o Hallucination1.9600.84036.677Full Pipeline1.9580.82637.029W/ GT Body Mesh1.1720.89158.858W/ GT Body and I B1.0590.91463.356", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Hallucination", "figure_data": "MethodSSIM↑ LPIPS↓KID (×10 -3 )↓Joints Err. (pixel)↓Pix2PixHD [69]0.8160.14186.253.1DreamPose [31]0.8440.13286.776.7Zero-1-to-3 [41]0.8620.11930.073.4ControlNet [77] +Interrogate0.8510.20239.035.7SiTH (Ours)0.9500.0633.221.5", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Single-view human reconstruction from multiple viewpoints. We report Chamfer distance, normal consistency (NC), and f-score between ground truth and predicted meshes. 
Note that gray color denotes models trained on more commercial 3D human scans while the others are trained on with the public THuman2.0 dataset.", "figure_data": "Pred-to-Scan / Scan-to-Pred (mm)↓NC↑f-Score↑Pred-to-Scan / Scan-to-Pred (mm)↓NC↑f-Score↑PIFu [60]26.359 / 40.6420.75529.28324.765 / 34.0070.78031.911PIFuHD [61]25.644 / 38.0500.75532.15723.004 / 30.0390.78536.311FOF [15]21.671 / 37.2460.77833.97121.995 / 31.0760.78934.403PaMIR [79]24.737 / 33.0490.78231.62123.471 / 30.0230.79734.404ICON [73]27.897 / 36.9070.75725.89825.957 / 37.8570.76326.857ECON [74]27.333 / 34.3640.76526.96027.447 / 38.8580.75727.075SiTH (Ours)21.324 / 29.0500.79134.19920.513 / 28.9230.80435.824AnglePred-to-Scan / Scan-to-Pred (mm)↓NC↑f-Score↑∆ CD∆ NC∆ f-Score0 •16.880 / 20.3140.842339.850---15 •16.428 / 20.1770.842839.971-0.452 / -0.137+0.0005+0.12130 •17.806 / 22.8020.830537.154+0.926 / +2.488-0.0118-2.69645 •18.585 / 23.3080.824335.652+1.705 / +2.994-0.0180-4.19860 •20.404 / 29.5190.805233.675+3.524 / +9.205-0.0371-6.17575 •22.111 / 33.3090.796032.334+5.231 / +12.995 -0.0463-7.51690 •23.752 / 38.3380.781630.011+6.872 / +18.024 -0.0607-9.839Table", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
Hsuan-I Ho; Jie Song; Otmar Hilliges
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Renderpeople", "year": "" }, { "authors": "", "journal": "", "ref_id": "b1", "title": "Stable diffusion image variations", "year": "" }, { "authors": "Badour Albahar; Shunsuke Saito; Hung-Yu Tseng; Changil Kim; Johannes Kopf; Jia-Bin Huang", "journal": "", "ref_id": "b2", "title": "Single-image 3d human digitization with shape-guided diffusion", "year": "2023" }, { "authors": "Thiemo Alldieck; Mihai Zanfir; Cristian Sminchisescu", "journal": "", "ref_id": "b3", "title": "Photorealistic monocular 3d reconstruction of humans wearing clothing", "year": "2022" }, { "authors": "J Paul; Neil D Besl; Mckay", "journal": "Spie", "ref_id": "b4", "title": "Method for registration of 3-d shapes", "year": "1992" }, { "authors": "Mikolaj Binkowski; Danica J Sutherland; Michael Arbel; Arthur Gretton", "journal": "", "ref_id": "b5", "title": "Demystifying MMD gans", "year": "2018" }, { "authors": "Federica Bogo; Angjoo Kanazawa; Christoph Lassner; Peter Gehler; Javier Romero; Michael J Black", "journal": "Springer International Publishing", "ref_id": "b6", "title": "Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image", "year": "2016" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in Neural Information Processing Systems (NeurIPS)", "ref_id": "b7", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Zhongang Cai; Wanqi Yin; Ailing Zeng; Chen Wei; Qingping Sun; Yanjun Wang; Hui En Pang; Haiyi Mei; Mingyuan Zhang; Lei Zhang; Chen Change Loy; Lei Yang; Ziwei Liu", "journal": "", "ref_id": "b8", "title": "Smpler-x: Scaling up expressive human pose and shape estimation", "year": "2023" }, { "authors": "Yukang Cao; Yan-Pei Cao; Kai Han; Ying Shan; Kwan-Yee K Wong", "journal": "", "ref_id": "b9", "title": "Dreamavatar: Text-and-shape guided 3d human avatar generation via diffusion models", "year": "2023" }, { "authors": "Rui Chen; Yongwei Chen; Ningxin Jiao; Kui Jia", "journal": "", "ref_id": "b10", "title": "Fantasia3d: Disentangling geometry and appearance for highquality text-to-3d content creation", "year": "2023" }, { "authors": "Enric Corona; Mihai Zanfir; Thiemo Alldieck; Eduard Gabriel Bazavan; Andrei Zanfir; Cristian Sminchisescu", "journal": "", "ref_id": "b11", "title": "Structured 3d features for reconstructing relightable and animatable avatars", "year": "2023" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b12", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Zijian Dong; Chen Guo; Jie Song; Xu Chen; Andreas Geiger; Otmar Hilliges", "journal": "", "ref_id": "b13", "title": "Pina: Learning a personalized implicit neural avatar from a single rgb-d video sequence", "year": "2022" }, { "authors": "Qiao Feng; Yebin Liu; Yu-Kun Lai; Jingyu Yang; Kun Li", "journal": "", "ref_id": "b14", "title": "Fof: Learning fourier occupancy field for monocular real-time human reconstruction", "year": "2022" }, { "authors": "Alp Rıza; Natalia Güler; Iasonas Neverova; Kokkinos", "journal": "", "ref_id": "b15", "title": "Densepose: Dense human pose estimation in the wild", "year": "2018" }, { "authors": "Chen Guo; Tianjian Jiang; Xu Chen; Jie Song; Otmar Hilliges", "journal": "", "ref_id": "b16", "title": "Vid2avatar: 3d avatar 
reconstruction from videos in the wild via self-supervised scene decomposition", "year": "2023" }, { "authors": "Sang-Hun Han; Min-Gyu Park; Ju Hong Yoon; Ju-Mi Kang; Young-Jae Park; Hae-Gon Jeon", "journal": "", "ref_id": "b17", "title": "High-fidelity 3d human digitization from single 2k resolution images", "year": "2023" }, { "authors": "Ayaan Haque; Matthew Tancik; Alexei Efros; Aleksander Holynski; Angjoo Kanazawa", "journal": "", "ref_id": "b18", "title": "Instruct-nerf2nerf: Editing 3d scenes with instructions", "year": "2023" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b19", "title": "Masked autoencoders are scalable vision learners", "year": "2022" }, { "authors": "Tong He; Yuanlu Xu; Shunsuke Saito; Stefano Soatto; Tony Tung", "journal": "", "ref_id": "b20", "title": "Arch++: Animation-ready clothed human reconstruction revisited", "year": "2021" }, { "authors": "Hsuan-I Ho; Lixin Xue; Jie Song; Otmar Hilliges", "journal": "", "ref_id": "b21", "title": "Learning locally editable virtual humans", "year": "2023" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b22", "title": "LoRA: Low-rank adaptation of large language models", "year": "2022" }, { "authors": "Yangyi Huang; Hongwei Yi; Yuliang Xiu; Tingting Liao; Jiaxiang Tang; Deng Cai; Justus Thies", "journal": "", "ref_id": "b23", "title": "TeCH: Text-guided Reconstruction of Lifelike Clothed Humans", "year": "2024" }, { "authors": "Zeng Huang; Yuanlu Xu; Christoph Lassner; Hao Li; Tony Tung", "journal": "", "ref_id": "b24", "title": "Arch: Animatable reconstruction of clothed humans", "year": "2020" }, { "authors": "Boyi Jiang; Yang Hong; Hujun Bao; Juyong Zhang", "journal": "", "ref_id": "b25", "title": "Selfrecon: Self reconstruction your digital avatar from monocular video", "year": "2022" }, { "authors": "Ruixiang Jiang; Can Wang; Jingbo Zhang; Menglei Chai; Mingming He; Dongdong Chen; Jing Liao", "journal": "", "ref_id": "b26", "title": "Avatarcraft: Transforming text into neural human avatars with parameterized shape and pose control", "year": "2023" }, { "authors": "Tianjian Jiang; Xu Chen; Jie Song; Otmar Hilliges", "journal": "", "ref_id": "b27", "title": "Instantavatar: Learning avatars from monocular video in 60 seconds", "year": "2023" }, { "authors": "Wei Jiang; Kwang Moo Yi; Golnoosh Samei; Oncel Tuzel; Anurag Ranjan", "journal": "", "ref_id": "b28", "title": "Neuman: Neural human radiance field from a single video", "year": "2022" }, { "authors": "Justin Johnson; Alexandre Alahi; Li Fei-Fei", "journal": "Springer", "ref_id": "b29", "title": "Perceptual losses for real-time style transfer and super-resolution", "year": "2016" }, { "authors": "Johanna Karras; Aleksander Holynski; Ting-Chun; Ira Wang; Kemelmacher-Shlizerman", "journal": "", "ref_id": "b30", "title": "Dreampose: Fashion image-to-video synthesis via stable diffusion", "year": "2023" }, { "authors": "Manuel Kaufmann; Velko Vechev; Dario Mylonopoulos", "journal": "aitviewer", "ref_id": "b31", "title": "", "year": "2022" }, { "authors": "Michael Kazhdan; Hugues Hoppe", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b32", "title": "Screened poisson surface reconstruction", "year": "2013" }, { "authors": "Matthew Michael Kazhdan; Hugues Bolitho; Hoppe", "journal": "", "ref_id": "b33", "title": "Poisson surface reconstruction", "year": "2006" }, { "authors": "Byungjun 
Kim; Patrick Kwon; Kwangho Lee; Myunggi Lee; Sookwan Han; Daesik Kim; Hanbyul Joo", "journal": "", "ref_id": "b34", "title": "Chupa: Carving 3d clothed humans from skinned shape priors using 2d diffusion probabilistic models", "year": "2023" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b35", "title": "Segment anything", "year": "2023" }, { "authors": "Nupur Kumari; Bingliang Zhang; Richard Zhang; Eli Shechtman; Jun-Yan Zhu", "journal": "", "ref_id": "b36", "title": "Multi-concept customization of text-to-image diffusion", "year": "2023" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "", "ref_id": "b37", "title": "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Tingting Liao; Xiaomei Zhang; Yuliang Xiu; Hongwei Yi; Xudong Liu; Guo-Jun Qi; Yong Zhang; Xuan Wang; Xiangyu Zhu; Zhen Lei", "journal": "", "ref_id": "b38", "title": "High-Fidelity Clothed Avatar Reconstruction from a Single Image", "year": "2023" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b39", "title": "Magic3d: High-resolution text-to-3d content creation", "year": "2023" }, { "authors": "Ruoshi Liu; Rundi Wu; Basile Van Hoorick; Pavel Tokmakov; Sergey Zakharov; Carl Vondrick", "journal": "", "ref_id": "b40", "title": "Zero-1-to-3: Zero-shot one image to 3d object", "year": "2023" }, { "authors": "Matthew Loper; Naureen Mahmood; Javier Romero; Gerard Pons-Moll; Michael J Black", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b41", "title": "SMPL: A skinned multiperson linear model", "year": "2015" }, { "authors": "E William; Harvey E Lorensen; Cline", "journal": "", "ref_id": "b42", "title": "Marching cubes: A high resolution 3d surface construction algorithm", "year": "1998" }, { "authors": "Camillo Lugaresi; Jiuqiang Tang; Hadon Nash; Chris Mc-Clanahan; Esha Uboweja; Michael Hays; Fan Zhang; Chuo-Ling Chang; Ming Yong; Juhyun Lee; Wan-Teh Chang; Wei Hua; Manfred Georg; Matthias Grundmann", "journal": "", "ref_id": "b43", "title": "Mediapipe: A framework for perceiving and processing reality", "year": "2019" }, { "authors": "Qianli Ma; Jinlong Yang; Anurag Ranjan; Sergi Pujades; Gerard Pons-Moll; Siyu Tang; Michael J Black", "journal": "", "ref_id": "b44", "title": "Learning to Dress 3D People in Generative Clothing", "year": "2020" }, { "authors": "Lars Mescheder; Michael Oechsle; Michael Niemeyer; Sebastian Nowozin; Andreas Geiger", "journal": "", "ref_id": "b45", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { "authors": "Gal Metzer; Elad Richardson; Or Patashnik; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b46", "title": "Latent-nerf for shape-guided generation of 3d shapes and textures", "year": "2023" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "", "ref_id": "b47", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b48", "title": "Instant neural graphics 
primitives with a multiresolution hash encoding", "year": "2022" }, { "authors": "Alejandro Newell; Kaiyu Yang; Jia Deng", "journal": "Springer", "ref_id": "b49", "title": "Stacked hourglass networks for human pose estimation", "year": "2016" }, { "authors": "Georgios Pavlakos; Vasileios Choutas; Nima Ghorbani; Timo Bolkart; Dimitrios Ahmed Aa Osman; Michael J Tzionas; Black", "journal": "", "ref_id": "b50", "title": "Expressive body capture: 3d hands, face, and body from a single image", "year": "2019" }, { "authors": "Dustin Podell; Zion English; Kyle Lacey; Andreas Blattmann; Tim Dockhorn; Jonas Müller; Joe Penna; Robin Rombach", "journal": "", "ref_id": "b51", "title": "Sdxl: Improving latent diffusion models for high-resolution image synthesis", "year": "2023" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b52", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2023" }, { "authors": "Guocheng Qian; Jinjie Mai; Abdullah Hamdi; Jian Ren; Aliaksandr Siarohin; Bing Li; Hsin-Ying Lee; Ivan Skorokhodov; Peter Wonka; Sergey Tulyakov; Bernard Ghanem", "journal": "", "ref_id": "b53", "title": "Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b54", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b55", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b56", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b57", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems (NeurIPS)", "ref_id": "b58", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Shunsuke Saito; Zeng Huang; Ryota Natsume; Shigeo Morishima; Angjoo Kanazawa; Hao Li", "journal": "", "ref_id": "b59", "title": "Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization", "year": "2019" }, { "authors": "Shunsuke Saito; Tomas Simon; Jason Saragih; Hanbyul Joo", "journal": "", "ref_id": "b60", "title": "Pifuhd: Multi-level pixel-aligned implicit function for high-resolution 3d human digitization", "year": "2020" }, { "authors": "Kaiyue Shen; Chen Guo; Manuel Kaufmann; Juan Zarate; Julien Valentin; Jie Song; Otmar Hilliges", "journal": "", "ref_id": "b61", "title": "X-avatar: Expressive human avatars", "year": "2023" }, { "authors": "Tianchang Shen; Jun Gao; Kangxue Yin; Ming-Yu Liu; Sanja Fidler", "journal": "", "ref_id": "b62", "title": "Deep marching tetrahedra: a hybrid representation for high-resolution 3d shape synthesis", "year": "2021" }, 
{ "authors": "David Svitov; Dmitrii Gudkov; Renat Bashirov; Victor Lempitsky", "journal": "", "ref_id": "b63", "title": "Dinar: Diffusion inpainting of neural textures for one-shot human avatars", "year": "2023" }, { "authors": "Maxim Tatarchenko; Stephan R Richter; René Ranftl; Zhuwen Li; Vladlen Koltun; Thomas Brox", "journal": "", "ref_id": "b64", "title": "What do single-view 3d reconstruction networks learn?", "year": "2019" }, { "authors": "Ayush Tewari; Justus Thies; Ben Mildenhall; Pratul Srinivasan; Edgar Tretschk; W Yifan; Christoph Lassner; Vincent Sitzmann; Ricardo Martin-Brualla; Stephen Lombardi", "journal": "Wiley Online Library", "ref_id": "b65", "title": "Advances in neural rendering", "year": "2022" }, { "authors": "Yating Tian; Hongwen Zhang; Yebin Liu; Limin Wang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)", "ref_id": "b66", "title": "Recovering 3d human mesh from monocular images: A survey", "year": "2023" }, { "authors": "Haochen Wang; Xiaodan Du; Jiahao Li; Raymond A Yeh; Greg Shakhnarovich", "journal": "", "ref_id": "b67", "title": "Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation", "year": "2023" }, { "authors": " Ting-Chun; Ming-Yu Wang; Jun-Yan Liu; Andrew Zhu; Jan Tao; Bryan Kautz; Catanzaro", "journal": "", "ref_id": "b68", "title": "High-resolution image synthesis and semantic manipulation with conditional gans", "year": "2018" }, { "authors": "Zhou Wang; Eero P Simoncelli; Alan C Bovik", "journal": "", "ref_id": "b69", "title": "Multiscale structural similarity for image quality assessment", "year": "2003" }, { "authors": "Chung-Yi Weng; Brian Curless; P Pratul; Jonathan T Srinivasan; Ira Barron; Kemelmacher-Shlizerman", "journal": "", "ref_id": "b70", "title": "Hu-manNeRF: Free-viewpoint rendering of moving people from monocular video", "year": "2022" }, { "authors": "Yiheng Xie; Towaki Takikawa; Shunsuke Saito; Or Litany; Shiqin Yan; Numair Khan; Federico Tombari; James Tompkin; Vincent Sitzmann; Srinath Sridhar", "journal": "Wiley Online Library", "ref_id": "b71", "title": "Neural fields in visual computing and beyond", "year": "2022" }, { "authors": "Yuliang Xiu; Jinlong Yang; Dimitrios Tzionas; Michael J Black", "journal": "", "ref_id": "b72", "title": "ICON: Implicit Clothed humans Obtained from Normals", "year": "2022" }, { "authors": "Yuliang Xiu; Jinlong Yang; Xu Cao; Dimitrios Tzionas; Michael J Black", "journal": "", "ref_id": "b73", "title": "ECON: Explicit Clothed humans Optimized via Normal integration", "year": "2023" }, { "authors": "Tao Yu; Zerong Zheng; Kaiwen Guo; Pengpeng Liu; Qionghai Dai; Yebin Liu", "journal": "", "ref_id": "b74", "title": "Function4d: Real-time human volumetric capture from very sparse consumer rgbd sensors", "year": "2021" }, { "authors": "Huichao Zhang; Bowen Chen; Hao Yang; Liao Qu; Xu Wang; Li Chen; Chao Long; Feida Zhu; Kang Du; Min Zheng", "journal": "", "ref_id": "b75", "title": "Avatarverse: High-quality stable 3d avatar creation from text and pose", "year": "2023" }, { "authors": "Lvmin Zhang; Anyi Rao; Maneesh Agrawala", "journal": "", "ref_id": "b76", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b77", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Zerong Zheng; Tao Yu; Yebin Liu; Qionghai Dai", "journal": "IEEE 
Transactions on Pattern Analysis and Machine Intelligence (TPAMI)", "ref_id": "b78", "title": "Pamir: Parametric model-conditioned implicit representation for image-based human reconstruction", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 317.84, 361.71, 218.3, 19.04 ], "formula_id": "formula_0", "formula_text": "min θ E z∼E(I),t,ϵ∼N (0,I) ϵ -ϵ θ (z t , t, I F , I B U V , I B M )2 2 ." }, { "formula_coordinates": [ 4, 345.34, 461.83, 199.77, 13.14 ], "formula_id": "formula_1", "formula_text": "ĨB = D(z 0 ) = D(f θ (z T , I F , I B U V , I B M )),(2)" }, { "formula_coordinates": [ 4, 337.68, 639.85, 207.43, 28.17 ], "formula_id": "formula_2", "formula_text": "Φ : R H×W ×3 × R H×W ×3 × R 3 → R × R 3 (I F , I B , x) → d x , r x ,(3)" }, { "formula_coordinates": [ 5, 50.11, 535.41, 236.25, 46.76 ], "formula_id": "formula_3", "formula_text": "(f d,x , f r,x ) ∈ R D : f d,x = B(f d , π(x)) = B(G d (N ), π(x)), f r,x = B(f r , π(x)) = B(G r (I), π(x)),(4)" }, { "formula_coordinates": [ 5, 418.93, 416.42, 126.18, 14.17 ], "formula_id": "formula_4", "formula_text": "xc ∥x -x c ∥ 2 ,(5)" }, { "formula_coordinates": [ 5, 380.88, 571.72, 164.23, 29.38 ], "formula_id": "formula_5", "formula_text": "d x = H d (f F d,x , f B d,x , p), r x = H r (f F r,x , f B r,x , p).(6)" }, { "formula_coordinates": [ 6, 90.64, 298.07, 195.73, 9.68 ], "formula_id": "formula_6", "formula_text": "L d = ∥d -d x ∥ 1 + λ n (1 -n • ∇ x d x ),(7)" }, { "formula_coordinates": [ 6, 134.04, 318.3, 152.32, 9.68 ], "formula_id": "formula_7", "formula_text": "L r = ∥r -r x ∥ 1 .(8)" } ]
2024-03-27
[ { "figure_ref": [], "heading": " 1 ", "publication_ref": [], "table_ref": [], "text": "The Chinese University of Hong Kong 2 Shanghai Artificial Intelligence Laboratory {wz122,wj020,ly122,dhlin}@ie.cuhk.edu.hk, daibo@pjlab.org.cn Abstract. Text-conditioned human motion synthesis has made remarkable progress with the emergence of diffusion models in recent research. However, the majority of these motion diffusion models are primarily designed for a single character and overlook multi-human interactions. In our approach, we strive to explore this problem by synthesizing human motion with interactions for a group of characters of any size. The key aspect of our approach is the adaptation of human-wise interactions as pairs of human joints that can be either in contact or separated by a desired distance. In contrast to existing methods that necessitate training motion generation models on multi-human motion datasets with a fixed number of characters, our approach inherently possesses the flexibility to model human interactions involving an arbitrary number of individuals, thereby transcending the limitations imposed by the training data. We introduce a novel controllable motion generation method, InterControl, to encourage the synthesized motions maintaining the desired distance between joint pairs. It consists of a motion controller and an inverse" }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b30", "b52", "b4", "b5", "b12", "b13", "b14", "b44", "b45", "b54", "b69", "b9", "b18", "b54", "b2", "b17", "b33", "b34", "b55", "b0", "b15", "b41", "b42", "b35", "b62", "b0", "b12", "b54", "b12", "b54", "b50", "b54", "b26", "b54", "b68", "b54", "b1", "b43", "b8", "b36", "b12", "b46", "b0", "b1" ], "table_ref": [], "text": "Generating realistic and diverse human motions is a vital task in computer vision, as it has diverse applications in VR/AR, games, and films. In recent years, great progress has been achieved in human motion generation by introducing VAE [31], Diffusion Model [23,54] and large language models [5]. These methods commonly investigated single-person motion generation given texts or action classes [6,[13][14][15]46,47,56,71], part of motion [10,19,56], or other related modalities [3,18,34,35,57], yet overlooked multi-person interactions. By naively putting their generated single-person motions in a shared global space, such motions could easily penetrate each other. They cannot even perform simple interactions like handshaking due to lack of the ability to control two people's hands to reach the same location at the same time. Many multi-person datasets [1,16,42,43] lacks text annotations and focus on motion completion given prefix motions. Recently, InterGen [36] collected a two-person interaction generation dataset, and let model to learn two-person motions from data. It is limited by the fixed number of characters and cannot generalize to arbitrary numbers. Previous methods commonly ignore a good design for general interaction modeling. This paper investigates a special yet widely used form of human interactions: interactions that could be quantitatively described by spatial relations of human joints, such as distances or orientations, as shown in Fig. 1 (a) and (b). Such interactions are conceptually simple, as their semantics are almost from spatial relations. Thus, such types of interactions do not require additional interaction data. 
It only needs pretrained models from single-person data and generalizes to an arbitrary number of humans. We define human interactions as steps of joint-joint contact pairs and devise a single-person motion generation model that takes such contact pairs as control signals. Besides, orientations can also be used as controls, such as making two people face each other. In this way, interaction generation is transformed into controllable motion generation. Inspired by [64], we adapt descriptions of interactions into joint contact pairs by leveraging Large Language Models (LLMs). Thus, human interactions are annotation-free, and interactions can also involve multiple human joints.
As interactions are adapted into our defined joint contact pairs, the key challenge in generating interactions is to control joints precisely enough to satisfy the spatial constraints. This difficulty lies in two parts: (1) the discrepancy between control signals in global space and the relative motion representation of mainstream pretrained models [13,56]: as the semantics of motions are independent of global locations, previous works [13,56] commonly utilize relative motions, where global locations can only be inferred by aggregating velocities. This makes it challenging to control local human poses with global conditions. Previous attempts [52,56] exploit the inpainting ability of a pretrained model, yet they are unable to control joints in global space. GMD [27] proposes a two-stage model that separates root trajectory generation from local pose generation. Although it manages to control root positions, controlling every joint at any time is still infeasible.
(2) the sparsity of control signals in the motion sequence: control signals can be sparse in both the temporal and joint dimensions, so the model needs to adaptively adjust trajectories in uncontrolled frames to satisfy the intermittent constraints.
In this paper, we propose InterControl, a novel human interaction generation method that is able to precisely control the position of any joint at any time for any person, and it is only trained on single-person motion data. By adding spatial controls to MDM [56], InterControl is a unified framework of two types of spatial control modules: (1) Motion ControlNet inspired by ControlNet [70]: it is initialized from a pretrained MDM [56] and takes global spatial locations as input for joint control in the global space. It is able to generate coherent and high-fidelity motions, yet joint positions in global space are not perfectly aligned. (2) Inverse Kinematics (IK) Guidance for joint locations: to further align generated motions with spatial conditions precisely, we use inverse kinematics (IK) [45] to guide the denoising steps towards the desired positions. It can be regarded as classifier guidance [9], yet it requires no extra classifiers. We utilize L-BFGS [37] as the optimizer to directly align the global conditions in the local space. With the two proposed modules, InterControl is able to control multiple joints of any person at any time. Furthermore, InterControl is able to jointly optimize multiple types of spatial controls, such as orientation alignment, collision avoidance, and joint contacts, as long as the distance measures in IK guidance are differentiable. By exploiting its joint control ability, our model is able to generate multi-person interactions with rich contacts, and no multi-person interaction datasets are needed. 
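As a rough illustration of how these two modules could interact inside one denoising step, consider the sketch below. It is not the authors' implementation: mdm_with_controlnet, posterior_mean, and ik_guidance are placeholder names for the components described later in the paper.

```python
import torch

def guided_denoise_step(x_t, t, text_prompt, spatial_cond, mdm_with_controlnet,
                        posterior_mean, ik_guidance, alphas):
    """One guided denoising step: predict the clean motion, form the posterior mean,
    then nudge the mean toward the spatial condition with IK guidance."""
    # 1) Clean-motion prediction conditioned on the text and the spatial control.
    x0_pred = mdm_with_controlnet(x_t, t, text_prompt, spatial_cond)
    # 2) Standard DDPM posterior mean (Eq. 1 in the method section).
    mu_t = posterior_mean(x_t, x0_pred, alphas, t)
    # 3) IK guidance: a few L-BFGS iterations aligning controlled joints with the condition.
    mu_t = ik_guidance(mu_t, spatial_cond)
    # 4) Sample x_{t-1} around the corrected mean (variance (1 - alpha_t) as in the paper).
    sigma_t = torch.sqrt(1.0 - alphas[t])
    return mu_t + sigma_t * torch.randn_like(mu_t) if t > 0 else mu_t
```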
Our generated interactions could further serve as the reference motion to generate physical animation with meaningful human-wise reactions in simulators. As shown in Fig. 1 (c), one character could actually hit down the other with his fists by taking our generated fighting motions as input. Extensive experiments in HumanML3D [13] and KIT-ML [48] datasets quantitatively validates our joint control ability, and the user study on generated interactions shows a clear preference over previous methods.\nTo summarize, our contributions are twofold: (1) We are the first to generate multi-person interactions with a single-person motion generation model. (2) We are the first to perform precise spatial control of every joint in every person at any time for interaction generation." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Human Motion Generation", "publication_ref": [ "b5", "b12", "b13", "b28", "b45", "b54", "b69", "b14", "b44", "b9", "b18", "b54", "b33", "b34", "b55", "b2", "b17", "b26", "b27", "b48", "b28", "b57", "b54", "b5", "b30", "b66", "b50", "b54", "b63", "b8", "b68" ], "table_ref": [], "text": "Synthesizing human motions is a long-standing topic. Previous efforts integrate extensive multimodal data as condition to facilitate conditional human motion generation, including text [6,13,14,29,47,56,71], action label [15,46], part of motion [10,19,56], music [34,35,57], speech [3,18] and trajectory [27,28,50]. As texts are free-form information that convey rich semantics, recent progress in motion generation are mainly based on text conditions. For example, FLAME [29] introduces transformer [59] to process variable-length motion data and language description. MDM [56] introduces the diffusion model and uses classifier-free guidance for text-conditioned motion generation. MLD [6] further incorporates a VAE [31] to encode motions into vectors and makes the diffusion process in the latent space. Physdiff [68] integrates physical simulators as constraints in the diffusion process to make the generated motion physically plausible and reduce artifacts. PriorMDM [52] treats pretrained MDM [56] as a generative prior and controls MDM by motion inpainting. Our InterControl also use a pretrained MDM, yet we further train a Motion ControlNet instead of using inpainting. A concurrent work OmniControl [65] also incorporate classifier guidance [9] and controlnet [70] modules to control all joints in MDM, yet it focuses on singleperson motion generation and does not investigate human interaction generation." }, { "figure_ref": [], "heading": "Human-related Interaction Generation.", "publication_ref": [ "b29", "b56", "b70", "b11", "b25", "b32", "b53", "b64", "b19", "b59", "b60", "b61", "b62", "b71", "b64", "b62", "b41", "b42", "b58", "b65", "b35" ], "table_ref": [], "text": "As human motions could be affected or interacted by surrounding humans [30,58,72], objects [12,26,33,55,66] and scenes [20,[61][62][63][64]73], generating interactions is also an important topic. Previous methods are mainly about human-scene/object interaction. For example, Interdiff [66] uses the contact point of human joints and objects as the root to generate object motions. UniHSI [64] exploits LLM to generate contact steps between human joints and scene parts as an action plan and control the agent perform the plan via reinforcement learning. 
As previous human-human interaction datasets [42,43] contain only very few multi-person sequences, previous human-human interaction methods [60,67] are mainly limited to unsupervised motion completion without texts. Recently, the InterHuman dataset [36] was proposed for text-conditioned multi-person interaction generation, yet it only considers the two-person setting and cannot model interactions among more people. To the best of our knowledge, we are the first to enable a single-person text-conditioned motion generation model to perform interactions between a group of people by controlling diverse joints of each person." }, { "figure_ref": [], "heading": "Controllable Diffusion Models", "publication_ref": [ "b8", "b23", "b49", "b52", "b10", "b16", "b21", "b31", "b0", "b6", "b7", "b50", "b8", "b23", "b68", "b68", "b54" ], "table_ref": [], "text": "Diffusion-based generative models have achieved great progress in generating various modalities, such as image [9,24,51,54], video [11,17,22] and audio [32]. Conditioning and control in diffusion models are also well studied: (1) Inpainting-based methods [7,8] predict part of the data with the observed parts as the condition and rely on the diffusion model to generate consistent output, which is used in PriorMDM [52]. (2) Classifier guidance [9] trains a separate classifier and exploits the gradient of the classifier to guide the diffusion process. Our InterControl inherits the spirit of classifier guidance, yet our guidance is provided by Inverse Kinematics (IK) and no classifier is needed. (3) Classifier-free guidance [24] trains a conditional and an unconditional diffusion model simultaneously and trades off quality and diversity by setting weights. (4) ControlNet [70] introduces a trainable copy of a pretrained diffusion model to process the condition and freezes the original model to avoid degeneration of its generation ability. It enables diverse types of dense control signals for various purposes with minimal finetuning effort. Our InterControl also incorporates the idea of ControlNet [70] to finetune the pretrained MDM [56] to process spatial control signals and improve the quality of generated motions after joint control." }, { "figure_ref": [], "heading": "InterControl", "publication_ref": [], "table_ref": [], "text": "InterControl aims to generate interactions with only single-person motion data by precisely controlling every joint of every person at any time, conditioned on text prompts and joint relations. We first formulate interaction generation in Sec. 3.1, and then introduce control modules for a single-person motion diffusion model in Sec. 3.3 and Sec. 3.4. Finally, we show details of how to generate interactions from our model in Sec. 3.5." }, { "figure_ref": [], "heading": "Formulation of Interaction Generation", "publication_ref": [ "b62", "b37", "b12" ], "table_ref": [], "text": "Inspired by human-scene interaction [64], we define human interactions as joint contact pairs C = {S_1, S_2, . . .}, where S_i is the i-th contact step. Taking two-person interaction as an example, each step S consists of several contact pairs S = {(j_k^1, j_k^2, t_k^s, t_k^e, c_k, d_k)}, where j_k^1 is a joint of person 1, j_k^2 is a joint of person 2, t_k^s and t_k^e denote the start and end frame of the interaction, c_k denotes the contact type from {contact, avoid} that pulls or pushes the joint pair, and d_k is the desired distance of the interaction. 
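To make the contact-pair notation concrete, here is a minimal sketch of how a contact plan could be stored in code. The class and field names (ContactPair, person_a, the joint indices, and the handshake example) are illustrative assumptions, not the paper's actual interface.

```python
from dataclasses import dataclass
from typing import List, Literal

@dataclass
class ContactPair:
    """One joint-joint relation inside a contact step S (illustrative layout)."""
    person_a: int          # index of the first person
    joint_a: int           # joint index of person a (e.g., a wrist in a 22-joint skeleton)
    person_b: int          # index of the second person
    joint_b: int           # joint index of person b
    t_start: int           # first frame of the interaction
    t_end: int             # last frame of the interaction
    contact_type: Literal["contact", "avoid"]  # pull the joints together or push them apart
    distance: float        # desired distance d_k in meters

# A contact plan C is an ordered list of steps; each step is a list of pairs.
ContactStep = List[ContactPair]
ContactPlan = List[ContactStep]

# Illustrative plan: a handshake, two wrists kept within 5 cm for frames 30-60.
handshake_plan: ContactPlan = [
    [ContactPair(person_a=0, joint_a=21, person_b=1, joint_b=21,
                 t_start=30, t_end=60, contact_type="contact", distance=0.05)]
]
```

An LLM planner, as described later in the interaction-generation section, would emit entries of this kind.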
By converting the contact pairs S to the mask m and distance d, and taking others' joint positions as condition, we could guide the multi-person motion generation process to interact between joints in the form of spatial distance. In this way, interaction generation is transformed to be controllable single-person motion generation taking a text prompt p and a spatial control signal c ∈ R N ×J×3 as input. Its goal is to predict motion sequence x ∈ R N ×D whose joints in the global space is aligned with spatial control c, where N is number of frames, J is number of joints (e.g., 24 in SMPL [38]), and D is the dimension of relative joint representations (e.g., 263 in HumanML3D [13]). It is non-trivial to incorporate spatial control in motion generation due to the discrepancy between the relative x and global c." }, { "figure_ref": [], "heading": "Human Motion Diffusion Model (MDM)", "publication_ref": [ "b12", "b5", "b50", "b54", "b66", "b12", "b50", "b54", "b26", "b8", "b23", "b49", "b52", "b68", "b54", "b23", "b57" ], "table_ref": [], "text": "Relative Motion Representation. HumanML3D [13] dataset proposes a widelyused [6,52,56,68] relative motion representation, and is proved to be easier to learn realistic motions, as the semantics of human motion is independent of global positions. It consists of root joint velocity, other joints' positions, velocities and rotations in the root space, and foot contact labels. To convert it to the global space, root velocities are aggregated, then other joints will be computed based on root. Please refer to Sec. 5 of HumanML3D [13] for details. Due to such discrepancy, previous inpainting-based methods [52,56] is not able to control MDM in global space. GMD [27] decouples motion generation to two separated generation process of root trajectory and pose relative to root, yet it can only control root joint. Directly adopting global joint positions to generate motions yields unnatural human poses, such as unrealistic limb lengths. Diffusion Process in MDM. Motivated by the success of image diffusion models [9,24,51,54,70], Motion Diffusion Model (MDM) [56] is proposed to synthesize sequence-level human motions conditioned on texts p via classifierfree guidance [24]. The diffusion process is modeled as a noising Markov process\nq (x t | x t-1 ) = N √ α t x t-1 , (1 -α t ) I\n, where α t ∈ (0, 1) are small constant hyper-parameters, thus x T ∼ N (0, I) if α t is small enough. Here x t ∈ R N ×D is the entire motion sequence at denoising time-step t, and there are T time-steps in total. Thus, x 0 is the clean motion sequence, and x T is almost a random noise to be sampled. The denoising Markov process is defined as\np θ (x t-1 | x t , p) = N (µ θ (x t , t, p), (1 -α t ) I)\n, where µ θ (x t , t, p) is the estimated posterior mean for the t -1 step from a neural network based on the input x t and θ is its parameters. Following MDM, we predict the clean motion x 0 (x t , t, p; θ) instead of the noise ϵ via a transformer [59], and the posterior mean µ θ (x t , t, p) is\nµ θ (x t , t, p) = √ ᾱt-1 β t 1 -ᾱt x 0 (x t , t, p; θ) + √ α t (1 -ᾱt-1 ) 1 -ᾱt x t ,(1)\nwhere\nβ t = 1 -α t and ᾱt = t s=0 α s . MDM's parameter θ is trained by min- imizing the ℓ 2 -loss ∥x 0 (x t , t, p; θ) -x * 0 ∥ 2 2\nwhere x * 0 is the ground-truth motion and x 0 (x t , t, p; θ) is MDM's prediction of x 0 at denoising timestep t." 
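A small sketch of how the posterior mean in Eq. (1) could be evaluated from the network's clean-motion prediction is given below; the tensor shapes and the handling of t = 0 are assumptions for illustration, not the released code.

```python
import torch

def posterior_mean(x_t: torch.Tensor, x0_pred: torch.Tensor,
                   alphas: torch.Tensor, t: int) -> torch.Tensor:
    """DDPM posterior mean of Eq. (1): a weighted combination of the predicted
    clean motion x0 and the current noisy motion x_t.

    x_t, x0_pred: (N, D) motion tensors; alphas: (T,) noise schedule."""
    alpha_bar = torch.cumprod(alphas, dim=0)           # \bar{alpha}_t = prod_{s<=t} alpha_s
    alpha_bar_prev = alpha_bar[t - 1] if t > 0 else torch.tensor(1.0)
    beta_t = 1.0 - alphas[t]
    coef_x0 = torch.sqrt(alpha_bar_prev) * beta_t / (1.0 - alpha_bar[t])
    coef_xt = torch.sqrt(alphas[t]) * (1.0 - alpha_bar_prev) / (1.0 - alpha_bar[t])
    return coef_x0 * x0_pred + coef_xt * x_t
```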
}, { "figure_ref": [], "heading": "Motion ControlNet for MDM", "publication_ref": [ "b68" ], "table_ref": [], "text": "As MDM is initially conditioned on texts p, it requires fine-tuning to accommodate spatial conditions c. This is challenging due to the potential sparsity of c across temporal and joint dimensions: (1) Control may be required for only a few joints, necessitating adaptive adjustment of the remaining joints to preserve realistic motion. (2) Control may be desired for only a select few frames, thus the model must interpolate natural human motions for the rest of the sequence.
Inspired by ControlNet [70], we introduce Motion ControlNet to generate realistic and high-fidelity motions guided by the condition c. It is a trainable copy of MDM, while the original MDM is frozen during our training process. Each transformer encoder layer in ControlNet is connected to its MDM counterpart via a zero-initialized linear layer." }, { "figure_ref": [], "heading": "Method overview (figure labels)", "publication_ref": [], "table_ref": [], "text": "Overview figure labels: an interaction description (e.g., 'Two people are fighting with each other') is converted by the LLM Planner into a contact plan (single-person text such as 'A person is fighting with his fists and feet like in martial arts' plus contact joint pairs, e.g., using the hands to hit the other's head over many timesteps); the spatial control is fed to Motion ControlNet, a trainable copy of the frozen Motion Diffusion Model; starting from Gaussian noise, the transformer layers denoise the motion, and IK guidance optimizes the posterior mean with L-BFGS for k iterations per step." }, { "figure_ref": [], "heading": "Inverse Kinematics (IK) Guidance", "publication_ref": [ "b8", "b51", "b36", "b8" ], "table_ref": [], "text": "While Motion ControlNet can adapt joint positions according to sparse conditions, the alignment between predicted poses and global spatial conditions often lacks precision. As Inverse Kinematics (IK) is a classic method for optimizing joint rotations to achieve specific global positions, we employ it to guide the diffusion process towards spatial conditions at test time in a classifier-guidance [9] manner, named IK guidance.
IK guidance on general forms of losses. Inspired by classifier guidance [9] and loss-guided diffusion [53], we employ losses in the global space to steer the denoising process. IK guidance accommodates various forms of distance measurements, enabling both minimization and maximization for flexible control over joint interactions, such as attraction or repulsion. Given the global position c ∈ R N ×J×3 , the distance between a joint and its condition is
d_nj = ∥ c_nj − R(µ_t)_nj ∥_2 ,
where µ_t is short for µ_θ(x_t, t, p) mentioned in Sec. 3.2, and R(•) is forward kinematics (FK). 
To allow the interaction of joints with some given distances\nd ′ ∈ R N ×J×3 , loss of one joint is l nj = ReLU d nj -d ′ nj\nto make the joint and condition be contacted within distance d ′ nj ; and it is l nj = ReLU d ′ nj -d nj to make the joint and condition be far away, where ReLU is a function to keep values ≥ 0 and set values ≤ 0 to 0. Finally, with a binary mask m ∈ {0, 1} N ×J×3 , the total loss for all joints and frames is\nL(µ t , c) = n j m nj • l nj n j m nj ,(2)\nAs ℓ 2 -loss and FK are highly differentiable, we optimize L(µ t , c) in Equ. 2 w.r.t µ t using the second-order optimizer L-BFGS [37], which is commonly used in Inverse Kinematics, rather than first-order gradient methods. Classifier guidance [9] utilizes a pre-trained image classifier to direct the diffusion towards a target image class by the gradient ∇ xt log f ϕ (y | x t ), where f ϕ is the classifier, y is image class. Unlike this method, we do not rely on a large neural network classifier. L-BFGS has been demonstrated to better align global positions and offer quicker convergence than first-order methods. We update the posterior mean µ t using L-BFGS for k iterations at each denoising step, where k is a hyper-parameter. This optimization facilitates both pull and push types of IK guidance, corresponding to two contact types in our interaction model. To maintain consistency in data distribution between training and inference, we also apply IK guidance when training ControlNet. Additionally, employing IK guidance on x 0 eliminates the need for training Motion ControlNet, thus enhancing training efficiency. In practice, using L-BFGS on both x 0 and µ t can yield satisfactory joint and spatial condition alignment. Detailed algorithms for x 0 , µ t , and interactions are presented in Appendix A.1.\nAs the root position at frame n is derived from cumulative root velocities up to frame n in FK, a single condition at frame n can influence all preceding root positions. This effect also extends to non-root joints, as their global positions are calculated from the root. Consequently, IK guidance can adaptively modify velocities from the start to frame n to meet the condition at frame n. Moreover, IK guidance can control any combination of human joints, frames or XYZ-dims, such as controlling the left hand and right foot at a specific frame n." }, { "figure_ref": [], "heading": "Interaction Generation", "publication_ref": [ "b68", "b12" ], "table_ref": [], "text": "Inverse Kinematics (IK) guidance can optimize various distance measures to facilitate interactions such as avoiding obstacles, preventing collisions, facilitating face-to-face engagements, or enabling joint contacts between individuals. This method allows for intricate interactions among any human joints for an indefinite number of people, despite being trained exclusively on single-person data. As delineated in Section 3.1, we characterize interactions as pairs of contacting joints. A notable feature of our IK guidance in generating interactions is that both terms of the IK guidance loss function are predicted, allowing for simultaneous optimization within a single process. Specifically, the single-person loss L single (µt, c) transforms into L multi (µ a t , µ b t ) for interactions, where a and b represent two individuals. The L-BFGS optimizer concurrently optimizes both participants by minimizing L multi (µ a t , µ b t ), with µ a t and µ b t being the respective joints engaged in interaction. 
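The paper points to its Appendix A.1 for PyTorch-like loss code; the sketch below is an independent, simplified illustration of the masked contact/separation loss of Eq. (2), an L-BFGS guidance step on the posterior mean, and the two-person variant. Here to_global stands in for the relative-to-global conversion R(·), and the optimizer settings are assumptions.

```python
import torch

def contact_loss(global_joints, c, d_target, mask, avoid=False):
    """Masked loss of Eq. (2). mask selects which (frame, joint) entries are controlled."""
    d = torch.linalg.norm(c - global_joints, dim=-1)                   # (N, J)
    l = torch.relu(d_target - d) if avoid else torch.relu(d - d_target)
    return (mask * l).sum() / mask.sum().clamp(min=1)

def ik_guidance(mu_t, c, d_target, mask, to_global, num_iters=5):
    """Run a few L-BFGS iterations pulling the controlled joints of mu_t toward c."""
    mu = mu_t.detach().clone().requires_grad_(True)
    opt = torch.optim.LBFGS([mu], max_iter=num_iters, line_search_fn="strong_wolfe")

    def closure():
        opt.zero_grad()
        loss = contact_loss(to_global(mu), c, d_target, mask)
        loss.backward()
        return loss

    opt.step(closure)
    return mu.detach()

def interaction_loss(joints_a, joints_b, d_target, mask):
    """Two-person variant: both sides of the distance are predicted, so a single
    optimization over (mu_a, mu_b) pulls both people's joints toward each other."""
    d = torch.linalg.norm(joints_a - joints_b, dim=-1)
    return (mask * torch.relu(d - d_target)).sum() / mask.sum().clamp(min=1)
```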
Beyond distance measures, our IK guidance can optimize orientation measures as well. For example, one can calculate a person's orientation through the spatial relationship of their joints, like the cross-product of the vectors from the left shoulder to the right shoulder and from the pelvis to the head. By driving the sum of two individuals' unit orientation vectors to 0 (i.e., making the vectors opposite), they can face each other or turn away. To ensure they face each other, we can further constrain the relation between one person's orientation vector and the vector from their head to the other's. Such orientation relationships are vital for producing realistic interactions when only a single-person motion generation model is exploited, and they can easily be expanded to larger groups. Another useful strategy in IK guidance is to prevent collisions through joint separation pairs, ensuring that the torso joints of two people (such as the pelvis, hips, and spine) maintain a certain distance, thereby reducing the likelihood of collisions when other joints are in contact. Besides, we can also regulate the motion region by confining the root joints within the XZ-plane using IK guidance. For the PyTorch-like code illustrating loss functions that enforce joint contacts, separations, or orientational alignment, please refer to Appendix A.1.
In our framework, interaction generation is realized by using joint-joint contact pairs as control signals. These pairs can be manually crafted by users to create desired interactions, akin to utilizing ControlNet [70] in image generation. However, manually constructing joint contact pairs can be tedious, so we employ an automatic off-the-shelf GPT-4 [44] as a planner. GPT-4 reads text prompts that describe the actions of multiple people, p_multi, and converts them into single-person prompts, p, and contact plans, C, through prompt engineering. The inputs for the LLM Planner include the multi-person sentences p_multi, background scenario details B, human joint data J, and predefined instructions, rules, and examples. Specifically, B encompasses the number of individuals, the total number of motion sequence frames, and the video playback speed; J contains the names of all joints (for example, the 22 joint names in HumanML3D [13]); and the rules outline the joint contact pair format and guide the LLM to generate feasible contacts and timesteps. Our method leverages the pre-trained capabilities of GPT-4 to comprehend human joint relationships from interaction descriptions via prompt engineering, without any fine-tuning. Thus, the inference process of our model does not depend on LLMs, making our comparison with other methods fair. Please refer to Appendix A.3 for details of prompts and contact plans." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b12", "b46", "b54", "b40", "b14", "b12", "b26", "b12", "b26", "b54", "b54", "b47", "b36", "b38" ], "table_ref": [], "text": "Datasets. We conduct experiments on HumanML3D [13] and KIT-ML [48] following MDM [56]. HumanML3D contains 14,646 high-quality human motion sequences from AMASS [41] and HumanAct12 [15], while KIT-ML contains 3,911 motion sequences with more noise. Evaluation Protocol. We adopt the metrics suggested by Guo et al. [13] to evaluate the quality of the alignment between text and motion, namely Frechet Inception Distance (FID), R-Precision, and Diversity. We also report metrics related to spatial controls following GMD [27] on the HumanML3D dataset, namely Foot skating ratio, Trajectory error, Location error, and Average error. 
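For reference, these control-oriented metrics can be sketched roughly as below. The threshold values and the exact success criteria are illustrative assumptions in the spirit of GMD's protocol, not the benchmark's evaluation code.

```python
import torch

def control_errors(pred_joints, target, mask, thresh=0.5):
    """pred_joints, target: (B, N, J, 3) global positions; mask: (B, N, J) controlled entries.

    Average error: mean distance over controlled entries.
    Location error: ratio of controlled entries farther than `thresh` from the target.
    Trajectory error: ratio of sequences with at least one such failed entry."""
    dist = torch.linalg.norm(pred_joints - target, dim=-1)        # (B, N, J)
    mask = mask.float()
    n_ctrl = mask.sum(dim=(1, 2)).clamp(min=1.0)
    avg_err = ((dist * mask).sum(dim=(1, 2)) / n_ctrl).mean()
    failed = (dist > thresh).float() * mask
    loc_err = (failed.sum(dim=(1, 2)) / n_ctrl).mean()
    traj_err = (failed.flatten(1).sum(dim=1) > 0).float().mean()
    return avg_err, loc_err, traj_err

def foot_skating_ratio(foot_pos, height_thresh=0.05, vel_thresh=0.01):
    """foot_pos: (B, N, 2, 3) left/right foot positions, y-up. A frame counts as
    skating when the foot stays near the ground yet slides horizontally."""
    vel = torch.linalg.norm(foot_pos[:, 1:, :, [0, 2]] - foot_pos[:, :-1, :, [0, 2]], dim=-1)
    on_ground = foot_pos[:, 1:, :, 1] < height_thresh
    return (on_ground & (vel > vel_thresh)).float().mean()
```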
Please refer to Appendix B.2 or papers [13,27] for more details. Implementation Details. We initialize parameters of both original MDM and Motion ControlNet from pretrained MDM [56] weight and freeze the parameters of original MDM during training. Following MDM [56], we use CLIP [49] model to encode text prompts. We run L-BFGS [37] in IK guidance 5 times for the first 990 denoising steps and 10 times for the last 10 denoising steps on the posterior mean µ t ; and once for the first 990 steps and 10 times for the last 10 steps on clean motion x 0 . We use IK guidance in training ControlNet when using it on µ t . We set two types of mask m ∈ {0, 1} N ×J×3 : (1) Only keeps pelvis (root) joint for root control to fairly compare with previous methods;\n(2) Randomly keep one joint in each iteration to learn to control all joints for interaction generation. Each type of mask will be used in both training and inference for consistency. Thus, we get two model weights, where (1) could be fairly compared with previous methods and we use (2) for interaction generation. We use AdamW [39] optimizer and set the learning rate as 1e-5." }, { "figure_ref": [], "heading": "Single-Person Motion Generation", "publication_ref": [ "b12", "b67", "b24", "b5", "b54", "b26", "b50", "b63", "b26", "b50", "b63", "b54", "b26", "b50", "b26", "b50", "b63", "b63" ], "table_ref": [], "text": "Text-conditioned motion generation. To generally compare our InterControl with previous text-conditioned motion generation methods, we report the alignment quality of text and generated motions suggested by Guo et. al. [13] in Tab. 1. Note that methods in the upper part of both tables are unable to perform spatial control, thus they are incapable of generating controllable motions and interactions even if they have lower FID or higher R-precision. For instance, T2M-GPT [69] and MotionGPT [25] tokenize human poses to discrete tokens and is unable to incorporate any spatial information. MLD [6] uses latent diffusion to accelerate denoising steps, yet performing spatial control needs to convert each step of latent feature back to motion representations. It leads to much more computation than MDM [56] and is opposite to MLD's motivation of latent diffusion. Among methods that are suitable for spatial control [27,52] in Tab. 1, InterControl achieves the best performance in most of semantic-level metrics, and is better than the recent work OmniControl [65] that focuses on single-person motion yet shares similar design of spatial controlling with us. Spatially controllable motion generation. In Tab. 2, we compare Inter-Control with other spatially controllable methods [27,52,65]. We also include results of MDM [56] to show the controlling metrics [27] without spatial control.MDM's trajectory can significantly deviate from the intended path in the absence of control signals, with an average error often exceeding 1m. In contrast, inpainting-based control, unaware of global spatial information, results in considerable divergence, as seen with PriorMDM [52]. GMD [27] decouples this problem and generates root trajectories in the global space, so it achieves better performance in spatial control metrics. However, its limitation to only the root joint constrains its spatial control and interaction capabilities. Our InterControl could achieve very small errors in spatial control metrics for all-joint control thanks to the power of Inverse Kinematics and L-BFGS optimizer. 
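As a small illustration of the two mask settings described in the implementation details, the helper below builds the binary mask m for either root-only control or a randomly chosen joint; the function name, shapes, and root index are assumptions for illustration.

```python
import torch

def build_control_mask(num_frames: int, num_joints: int, mode: str = "root",
                       root_index: int = 0) -> torch.Tensor:
    """Binary mask m in {0,1}^(N x J x 3) selecting which joints are controlled.

    mode="root":   keep only the pelvis (root) joint, as in the root-control setting.
    mode="random": keep one randomly chosen joint, as in the all-joint training setting."""
    mask = torch.zeros(num_frames, num_joints, 3)
    if mode == "root":
        mask[:, root_index] = 1.0
    elif mode == "random":
        joint = torch.randint(num_joints, (1,)).item()
        mask[:, joint] = 1.0
    else:
        raise ValueError(f"unknown mode: {mode}")
    return mask
```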
Meanwhile, Motion ControlNet ensures that the motion data stays in the same distribution as the training set by adapting to the posterior mean updated by IK guidance during its training stage, leading to an even better FID than previous methods. It is worth noting that we only use a single model to learn the control strategy for all joints, while the previous method [52] needs to train separate models and blend them for multiple joints. Our method achieves performance similar to single-joint control when extended to control multiple joints (last two rows in Tab. 2). Compared to the recent concurrent work [65], we achieve significantly better FID and Traj./Loc. errors in both root-joint control and random-joint control. OmniControl [65] also shows a notable gap between the two forms of joint control (0.310 vs. 0.218), while our method is more robust to the choice of controlled joints (0.178 vs. 0.159), thanks to the additional inputs we design for Motion ControlNet. Its R-precision and foot skating ratio are slightly better than ours; we believe the reason is that their first-order optimization tolerates more errors when joint alignment is hard. This is also supported by their worse Traj./Loc. errors yet better Avg. error, which indicates that their method produces more outliers with large errors. However, their design requires many more optimization iterations than ours (e.g., 100 vs. 5) and leads to a longer inference time (120s vs. 80s)." }, { "figure_ref": [ "fig_0" ], "heading": "Multi-Person Interaction Generation", "publication_ref": [ "b50", "b35", "b12", "b46", "b50", "b39" ], "table_ref": [], "text": "To validate our model's interaction generation ability, we analyze the spatial control results in interaction scenarios and perform a user study to qualitatively compare our model with PriorMDM [52]. We also introduce a potential application of our interaction generation method to physics-based animation. Spatial Control. In Tab. 3 (left), we compare spatial-related metrics with PriorMDM in zero-shot human interaction generation. Specifically, we collect 100 descriptions of two-person actions from the InterHuman dataset [36] and let an off-the-shelf LLM adapt them to single-person motion descriptions and joint-joint contact pairs via prompt engineering. Then, we utilize an InterControl model pretrained on the HumanML3D dataset to generate human interactions conditioned on text prompts and joint contact pairs. (Table 1: Text-to-motion evaluation on the (left) HumanML3D [13] and (right) KIT-ML [48] datasets. The right arrow → means closer to real data is better. Methods in the upper part are unable to perform spatial control. † means our implementation.) The spatial-related metrics are reported over the controlled joints and frames. InterControl achieves low spatial errors in interaction scenarios, indicating its robustness in precise spatial control for multiple humans. In contrast, PriorMDM [52] can only take interaction descriptions as input and is unable to perform spatial control, leading to much larger spatial errors. User Study. We conduct a user study to qualitatively compare our method with PriorMDM on text-conditioned two-person interaction generation. 134 unique users participated in the user study, where each user answered 19 single-choice questions comparing our results with PriorMDM. Results in Tab. 3 (right) show that our generated interactions are clearly preferred over PriorMDM, with a preference rate of 80.4%. Please refer to Appendix B.1 for more details.
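To make the contact plans used above concrete, the sketch below illustrates one way a single planned joint-joint step could be converted into the mask and spatial condition for one person, with the partner's joint positions copied in as the target; the schema field names and the helper function are our illustrative assumptions (the actual conversion scripts are described in Appendix A.3).

# Illustrative sketch (assumed plan schema) of turning one joint-joint contact
# step into the per-frame mask and spatial condition for person A; joint indices
# follow the 22-joint SMPL order used in the appendix listings.
import torch

JOINT_IDX = {"pelvis": 0, "left_wrist": 20, "right_wrist": 21}  # subset only

def step_to_condition(step, partner_joints, n_frames=196, n_joints=22):
    # step example: {"joint_a": "right_wrist", "joint_b": "right_wrist",
    #                "start": 50, "end": 60, "type": "contact", "distance": 0.05}
    # partner_joints: [n_frames, n_joints, 3] global joints of person B.
    mask = torch.zeros(n_frames, n_joints, 3)
    target_dist = torch.zeros(n_frames, n_joints)
    ja, jb = JOINT_IDX[step["joint_a"]], JOINT_IDX[step["joint_b"]]
    mask[step["start"]:step["end"], ja] = 1.0
    target_dist[step["start"]:step["end"], ja] = step["distance"]
    # person B's joint becomes the desired location c for person A's joint
    c = partner_joints[:, jb, :].unsqueeze(1).expand(-1, n_joints, -1)
    return c * mask, mask, target_dist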
Application: Our method can seamlessly integrate with off-the-shelf character simulation approaches, allowing us to synthesize physically plausible human reactions. As shown in Fig. 1 (c), our method synthesizes motions in which the orange character fights with two other characters, and these motions serve as the reference for the SoTA physics-aware motion imitator [40]. The interactions are designed so that fists hit the heads of the other characters. Leveraging the precise spatial control provided by our approach, the animated characters in the simulator can accurately respond to these impacts, resulting in realistic reactions such as being knocked down. This capability to generate spatially coherent multi-human interactions enables our method to improve the plausibility and responsiveness of synthesized reactions within physics-based character animations." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [ "b3", "b8", "b8", "b26", "b63", "b26" ], "table_ref": [], "text": "To further investigate the effectiveness of InterControl, we ablate our method in Tab. 4 and reveal key factors in controlling the motion generation model in the global space. We also analyze the computational cost of our method to ensure our control is efficient. We refer to the variants of InterControl by their row numbers in Tab. 4. All experiments are trained on all joints and evaluated with randomly selected joints to report average performance. Motion ControlNet. By dropping ControlNet, we find that IK guidance can still follow spatial controls with very low errors, yet the motion quality (e.g., FID) is significantly damaged (row 1 vs. row 2). Our ControlNet adapts to the posterior distribution updated by IK guidance and produces high-quality motion data. We also find that our c f inal provides key information in controlling all joints: for root control only, the FID with c f inal and with c shows little difference. However, the FID of root control is always slightly better than that of all-joint control (∼ 0.07) when we use c, indicating insufficient information for all-joint control. We alleviate this by introducing extra information in c f inal for Motion ControlNet, improving the FID of all-joint control from 0.227 (row 3) to 0.178 (row 1). IK guidance. By dropping IK guidance, Motion ControlNet can still produce good semantic-level metrics (e.g., FID) compared with MDM by using extra spatial cues (row 4). However, this variant leads to larger spatial errors and cannot strictly follow spatial controls in global space. As precise joint alignment is vital for interactions, IK guidance is important for our InterControl. Another variant is to apply IK guidance to ControlNet's prediction x 0 (row 5) instead of the posterior mean µ t ; its advantage is faster training, since IK guidance is no longer needed when training Motion ControlNet (similar to classifier guidance [9]), yet it leads to slightly worse FID than using µ t . We believe the reason is that IK guidance still changes the data distribution in the denoising steps even if it is applied on x 0 . Finally, we also report the result of using first-order gradients as in classifier guidance [9] (row 6) instead of L-BFGS. We find that it takes more computation to achieve performance similar to L-BFGS, as analyzed below. Inference time analysis. In practice, we find that IK guidance in the last few denoising steps (e.g., t ∈ [0, 9]) is vital for precise joint control, while most denoising steps t ∈ [10, 999] are less important yet take most of the computation. Running IK guidance on x 0 with only one L-BFGS call for t ∈ [10, 999] and 10 calls for t ∈ [0, 9] leads to an FID of 0.234 when controlling all joints, while adding minimal extra computation.
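A minimal sketch of this step-dependent schedule, assuming IK guidance is applied to the predicted clean motion x 0 with torch.optim.LBFGS, is shown below; the optimizer settings are our assumptions rather than the exact configuration.

# Sketch of step-dependent IK guidance: few L-BFGS calls for most denoising steps,
# more for the final ones (loss_fn is the masked hinge loss over controlled joints).
import torch

def ik_guidance(x0, loss_fn, t, late_iters=10, early_iters=1, late_range=10):
    x0 = x0.detach().clone().requires_grad_(True)
    n_iters = late_iters if t < late_range else early_iters
    opt = torch.optim.LBFGS([x0], line_search_fn="strong_wolfe")
    for _ in range(n_iters):
        def closure():
            opt.zero_grad()
            loss = loss_fn(x0)
            loss.backward()
            return loss
        opt.step(closure)
    return x0.detach()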
We report the total inference time of 1000 denoising steps by adding sub-modules step-by-step in Tab. 5. GMD [27] needs 110s to run its two-stage diffusion models, while ours only needs 80s. Gradient-based optimization in the recent work [65] needs 120s to achieve similar control quality. Thanks to the GPU's parallel computing, generating a batch of 32 people with InterControl only needs 91s, enabling efficient group motion generation. Sparse control signals in the temporal dimension. As a key challenge of spatial control is sparsity, we also report results with sparsely selected frames as control (sparsity = 0.25 and 0.025) in Tab. 4 (rows 7 and 8). Our model shows consistent performance in both spatial errors and semantic-level metrics under sparse signals, e.g., FID 0.255 and avg. err. 0.0467 with sparsity 0.025, while GMD [27] achieves FID 0.523 and avg. err. 0.139 at the same sparsity." }, { "figure_ref": [], "heading": "Conclusion and Limitations", "publication_ref": [], "table_ref": [], "text": "We presented InterControl, a multi-person interaction generation method that is trained only on single-person motion data. It can generate interactive human motions for an arbitrary number of people. We achieve this by equipping a text-conditioned motion generation model with the ability to control every joint of every person at any time. We propose two complementary modules, named Motion ControlNet and IK guidance, to improve both the spatial alignment between joints and their desired positions and the overall quality of the generated motions.
Extensive experiments are conducted on the HumanML3D and KIT-ML benchmarks to validate the effectiveness and efficiency of our proposed modules. We further enable text-conditioned interaction generation in InterControl by leveraging the knowledge of LLMs. Qualitative results and a user study validate that InterControl can generate high-quality interactions through precise spatial joint control.
Limitations. As InterControl is not trained on multi-person data, its definition of interaction is based on distances (being in contact or separated) or orientations. Its motion quality comes from diffusion models pretrained on single-person motion data, and the plausibility of its interactions comes from the knowledge of LLMs, i.e., the extent to which the joint contact pairs are consistent with the semantics of the interaction descriptions. Nevertheless, InterControl can generate interactions for an arbitrary number of people.
Algorithm 1 (single-person model inference with IK guidance on x 0 ):
1: x T ∼ N (0, I)
2: for t from T to 1 do
3:   # Motion ControlNet
4:   {f} ← C(x t , t, p, c f inal ; ϕ)
5:   # Motion Diffusion Model
6:   x 0 ← M(x t , t, p, {f}; θ)
7:   for k from 1 to K do
8:     x 0 ← L-BFGS(L(x 0 , c))  # IK guidance
9:   end for
10:  µ t , Σ t ← µ(x 0 , x t ), Σ t  # Posterior
11:  x t-1 ∼ N(µ t , Σ t )
12: end for
13: return x 0

Algorithm 2 (single-person model inference with IK guidance on µ t ):
1: x T ∼ N (0, I)
2: for t from T to 1 do
3:   # Motion ControlNet
4:   {f} ← C(x t , t, p, c f inal ; ϕ)
5:   # Motion Diffusion Model
6:   x 0 ← M(x t , t, p, {f}; θ)
7:   µ t , Σ t ← µ(x 0 , x t ), Σ t  # Posterior
8:   for k from 1 to K do
9:     µ t ← L-BFGS(L(µ t , c))  # IK guidance
10:  end for
11:  x t-1 ∼ N(µ t , Σ t )
12: end for
13: return x 0

Besides, applying IK guidance on x 0 requires fewer calls of L-BFGS [37], which means faster inference speed. µ t leads to better FID in controlling all joints, yet it requires more iterations of L-BFGS [37] and also needs IK guidance when training Motion ControlNet. We show the pseudo-code of InterControl with IK guidance on x 0 in Algorithm 1, and with IK guidance on µ t in Algorithm 2. We show the pseudo-code of InterControl for interaction generation in Algorithm 3. Besides, we also show examples of PyTorch-like code that implements collision avoidance (Listing 1.1) and makes two people face each other (Listing 1.2). By passing such loss functions to the L-BFGS optimizer in IK guidance, the desired orientations or joint distances in motion interactions can be achieved.

Listing 1.1 (collision-avoidance loss, excerpt; continued in the Listing 1.1 figure):
def avoid_collision_loss(a_joint, b_joint):
    """Calculate a loss to avoid collision between two people based on the distance between their torso joints.

    Args:
        a_joint and b_joint (torch.Tensor): Joint locations of person A and person B in SMPL format, shape [bs, njoints, 3, seqlen].
    """
    torso_id = [0, 1, 2, 3, 6, 9, 12, 13, 14, 15]

Algorithm 3 Two-people interaction model inference
Require: a Motion Diffusion Model M with parameter θ, a Motion ControlNet C with parameter ϕ, interaction prompts p multi , number of L-BFGS iterations K, Forward Kinematics operation FK, masked selection operation S.
1: x a T , x b T ∼ N (0, I)
2: for t from T to 1 do
3:   # LLM-Planner
4:   p a , p b , mask ← LLM(p multi )
5:   # Copy Spatial Condition from Each Other
6:   c a ← S(FK(x b t ), mask)
7:   c b ← S(FK(x a t ), mask)
8:   # Motion ControlNet
9:   {f} a ← C(x a t , t, p a , c a ; ϕ)
10:  {f} b ← C(x b t , t, p b , c b ; ϕ)
11:  # Motion Diffusion Model
12:  x a 0 ← M(x a t , t, p a , {f} a ; θ)
13:  x b 0 ← M(x b t , t, p b , {f} b ; θ)
14:  µ a t , Σ t ← µ(x a 0 , x a t ), Σ t  # Posterior
15:  µ b t , Σ t ← µ(x b 0 , x b t ), Σ t  # Posterior
16:  for k from 1 to K do
17:    # IK guidance
18:    µ a t , µ b t ← L-BFGS(L(µ a t , µ b t ))
19:  end for
20:  x a t-1 ∼ N(µ a t , Σ t )
21:  x b t-1 ∼ N(µ b t , Σ t )
22: end for

The LLM will generate 10 task plans for us, as shown in Tab. 8. We manually correct typos in the task plans generated by the LLM, such as misspelled joint names, invalid joint names, or invalid start or end frames, which leaves 989 valid task plans. Finally, we write Python scripts to transform the natural-language task plans into a Python/JSON format, as shown in Tab. 9. We take the single-person language prompts in the task plans as the texts for the motion diffusion model, and transform the information in 'steps' into joint contact masks in the spatial condition. Specifically, we update the other person's joint positions as the current person's spatial condition in each denoising step, and use the spatial condition to guide Motion ControlNet and IK guidance in the same way as in single-person scenarios. We evaluate the quality of interactions using metrics such as the trajectory error and average error proposed by GMD [27], again in the same way as in single-person scenarios, and we only evaluate on the joints and frames in the joint-joint contact pairs. The result on our collected 989 task plans is shown in Tab. 5 in the main paper." }, { "figure_ref": [], "heading": "B Additional Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_4" ], "heading": "B.1 Details of User Study", "publication_ref": [ "b50" ], "table_ref": [], "text": "In the user study, our method generates 50 samples from the contact plans collected from the LLM planner. We also use the original interaction descriptions to generate two-person interactions with ComMDM in PriorMDM [52]. In Fig. 4, we show the evaluation instructions of our designed questionnaire and the first question as an example. Each questionnaire has 19 single-choice questions randomly sampled from all samples.
In the folder named 'user-study-videos', we provide 25 videos sampled from our InterControl and PriorMDM for reference." }, { "figure_ref": [], "heading": "B.2 Details of Evaluation Metrics", "publication_ref": [ "b12", "b26", "b12", "b26" ], "table_ref": [], "text": "Here we select some descriptions of the metrics used to evaluate controllable motion generation methods from HumanML3D [13] and GMD [27] to save readers' time.
Semantic-level evaluation metrics from HumanML3D [13]: Frechet Inception Distance (FID), Diversity, and MultiModality. For quantitative evaluation, a motion feature extractor and a text feature extractor are trained under a contrastive loss to produce geometrically close feature vectors for matched text-motion pairs, and vice versa. Further explanations of the aforementioned metrics, as well as the specific textual and motion feature extractors, are relegated to the supplementary file due to the space limit. In addition, R-precision and MultiModal distance are proposed in this work as complementary metrics, as follows. Consider R-precision: for each generated motion, its ground-truth text description and 31 randomly selected mismatched descriptions from the test set form a description pool. This is followed by calculating and ranking the Euclidean distances between the motion feature and the text feature of each description in the pool. We then count the average accuracy at the top-1, top-2, and top-3 places. The ground-truth entry falling into the top-k candidates is treated as a successful retrieval; otherwise it fails. Meanwhile, MultiModal distance is computed as the average Euclidean distance between the motion feature of each generated motion and the text feature of its corresponding description in the test set.
Spatial-level evaluation metrics from GMD [27]: We use Trajectory diversity, Trajectory error, Location error, and Average error of keyframe locations. Trajectory diversity measures the root mean square distance of each location of each motion step from the average location of that motion step across multiple samples with the same settings. Trajectory error is the ratio of unsuccessful trajectories, defined as those with any keyframe location error exceeding a threshold. Location error is the ratio of keyframe locations that are not reached within a threshold distance. Average error measures the mean distance between the generated motion locations and the keyframe locations, measured at the keyframe motion steps." }, { "figure_ref": [], "heading": "B.3 More Single-joint Control Results", "publication_ref": [ "b63" ], "table_ref": [], "text": "In Tab. 2 of our main paper, we have shown the spatial control results with the root joint and with randomly selected one/two/three joints. Following the recent work [65], we also show the spatial control performance on specific joints in Tab. 6. We find that the feet and hands are more difficult to control due to their flexibility, while the root (pelvis) and head are easier to follow, leading to better FID and R-precision." }, { "figure_ref": [], "heading": "Input", "publication_ref": [], "table_ref": [], "text": "Instruction: two people greet each other with a handshake, while holding their cards in the left hand. Given the instruction, generate 10 task plans according to the following background information, rules, and examples.
Each task plan should completely reflect an entire process of actions described in the instruction.\n[start of background Information [ Human has JOINTS: ['pelvis', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist' [. The total number of TIME-STEPS of human motion is 99, the frame-per-second of motion is 20.\nThe provided text instruction is describing two people performing some actions containing human joint contacts. The height of all people is 1.8 meters, the arm length is 0.6 meters, and the leg length is 0.9 meters. Two people are 2 meters away at the beginning (i.e., TIME-STEPS=0).\n[end of background Information] [start of rules] 1. Each task plan should be composite into detailed steps. 2. Each step should contain meaningful joint-joint pairs. 3. Each joint-joint pair should be formatted into {JOINT, JOINT, TIME-STEP, TIME-STEP, CON-TACT TYPE, DISTANCE}. JOINT should be replaced by JOINT in the background information. IMPORTANT: The first JOINT belongs to person 1, and the second JOINT belongs to person 2. Each joint-joint pair represents a contact of a joint of person 1 and a joint of person 2. The first TIME-STEP is the start frame number of contact, and the second TIME-STEP is the end frame number of contact. CONTACT TYPE should be selected from {contact, avoid}, DISTANCE should be a float number representing how many meters should be the distance of two joints in the jointjoint pair. 4. Consider which JOINT will be interacted when two people perform the action described in the text instruction. Translate the text instruction to be steps of joint-joint pairs. Do not include extra joint-joint pairs that is unrelated to the text instruction. IMPORTANT: make joint-joint pairs in different task plans diverse in TIME-STEPS and JOINTs. Each joint-joint contact pairs should be lasting from 3 to 10 frames. 5. Be plausible. Do not generate uncommon interactions. Generate plausible interaction time-steps, and consider the velocity of human motions. 6. Use one sentence to describe what action should person 1 do and one sentence to describe what action should person 2 do according to the text instruction at the beginning of the task plan. IM-PORTANT: the sentence starts from 'text 1:' describing the action of person 1 from the perspective of person 1 and the sentence starts from 'text 2:' describing the action of person 2 from the perspective of person 2. Sentences should NOT contain words like 'person 1' or 'person 2', use 'a person' to refer to himself in the sentence and 'others' to refer to others. 7. The steps in the task plan are for both two people. Use one set of steps to describe both two people. The first JOINT belongs to person 1, and the second JOINT belongs to person 2. 8. IMPORTANT: Do NOT add explanations for the steps in task plans. Each step only have one joint-joint pairs. [end of rules] [start of an example] Instruction: two people greet each other with a handshake, while holding their cards in the left hand.\n[Start of Plan 1] Text 1: a person make a handshake with others using his right wrist, while holding his cards in the left wrist. Text 2: a person make a handshake with others using his right wrist, while holding his cards in the left wrist. 
Step 1: {right_foot, left_knee, 4, 12, contact, 0.31} Step 2: {left_wrist, right_shoulder, 20, 30, avoid, 0.3} Step 3: {right_wrist, head, 73, 81, contact, 0.05} [End of Plan 5] " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgment. This project is funded in part by Shanghai AI Laboratory (P23KS00020, 2022ZD0160201), CUHK Interdisciplinary AI Research Institute, and the Centre for Perceptual and Interactive Intelligence (CPII) Ltd under the Innovation and Technology Commission (ITC)'s InnoHK. We would like to thank Tianfan Xue for his insightful discussion." }, { "figure_ref": [], "heading": "A More Details about InterControl", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Pseudo-code of IK guidance", "publication_ref": [ "b20" ], "table_ref": [], "text": "Here we elaborate the details of IK guidance's algorithm. As we mentioned in the main paper, IK guidance could be performed on predicted clean motion (i.e., x 0 ) or posterior mean in denoising step t (i.e., µ t ). In practice, we find that x 0 works well in root control, and it does not require IK guidance in training Motion ControlNet, leading to faster training speed. Besides, it also requires less To process condition c, the uncontrolled joints, frames and XYZ-dim are masked as 0. Then we use a linear layer to project the condition c ∈ R N ×3J to the hidden dimension of transformer layers as c H ∈ R N ×D H , and feed c ′ to transformer encoder layers in ControlNet. We use a zero-initialized linear layer to link the output of each layer in ControlNet to the transformer encoder layer of pretrained and frozen MDM via a residual connection [21]. We use extra information as condition for Motion ControlNet c f inal = cat(c ′ , c ′′ , n s , n h ). The details of c f inal has been explained in Sec 3.3 in our main paper." }, { "figure_ref": [], "heading": "A.3 LLM-Planner", "publication_ref": [ "b35" ], "table_ref": [], "text": "In this section, we further elaborate the details of LLM Planner. Specifically, we collect 100 sentences describing human interactions with joint contacts from the description of InterHuman Dataset [36]. Then, we use a GPT-4 [44] with the prompt in Tab. 7 to let GPT-4 to produce joint-joint contact plans for us. For each collected sentence, we replace it as the instruction in the prompt, and LLM " } ]
kinematics guidance module that realistically and accurately aligns the joints of synthesized characters to the desired location. Furthermore, we demonstrate that the distance between joint pairs for human-wise interactions can be generated using an off-the-shelf Large Language Model (LLM). Experimental results highlight the capability of our framework to generate interactions with multiple human characters and its potential to work with off-the-shelf physics-based character simulators. Code is available at https://github.com/zhenzhiwang/intercontrol.
InterControl: Generating Human Motion Interactions by Controlling Every Joint
[ { "figure_caption": "Fig. 1 :1Fig. 1: InterControl is able to generate interactions of a group of people given joint-joint contact or separation pairs as spatial condition, and it is only trained on single-person data. Our generated interactions are realistic and similar to real interactions in internet images in (a) daily life and (b) fighting. (c) shows our generated group motions (red dots) could serve as reference motions for physics animation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Overview. Our model could precisely control human joints in the global space via the Motion ControlNet and IK guidance module. By leveraging LLM to adapt interaction descriptions to joint contact pairs, it could generate multi-person interactions via a single-person motion generation model in a zero-shot manner.linear layer. This allows InterControl to commence training from a state equivalent to a pretrained MDM, acquiring a residual feature for c in each layer through back-propagation. To process c, the uncontrolled joints, frames, and XYZ-dim are masked as 0. We find that the vanilla c ∈ R N ×3J is effective enough to control the pelvis (root) joint, yet it is still sub-optimal for other joints. Thus, we design a relative condition indicating the distance from the current positions of each joint to c. Suppose R(•) is a forward kinematics (FK) to convert relative motion x ∈ R N ×D to global space R(x) ∈ R N ×J×3 , the relative condition is c ′ = c -R(x). To provide additional clues, we also use c ′′ = c -R(x) root to represent the distance from the current root to the desired position. We also use the normal of triangles (pelvis, left/right shoulder) n s and (pelvis, left/right hip) n h to represent the current orientation of human. The final condition to be fed to ControlNet is c f inal = cat(c ′ , c ′′ , n s , n h ), where cat is concatenation. Please refer to Appendix A.2 for its more details. Network Training. Motion ControlNet is the only part that needs finetuning in our framework, while IK guidance is an optimization method in the test time and the LLM in our framework is an off-the-shelf GPT-4 [44]. We adopt the standard ControlNet[70] training strategy, and the only difference is the data format: we first convert the relative motion to be global locations by FK, and then use random masks that keeps part of global joints to be non-zero as spatial control signals. The training objective is identical to MDM.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm 11Single-person model inference with x 0 Require: a Motion Diffusion Model M with parameter θ, a Motion ControlNet C with parameter ϕ, original spatial condition c and spatial condition with extra information c f inal , text prompts p, number of L-BFGS K.", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Listing 1 . 1 :11[:, torso_id.unsqueeze(1), dim_indices, :] b_torso_joints = b_joint[:, torso_id.unsqueeze(1), dim_indices, :] diff = a_torso_joints.unsqueeze(2) -b_torso_joints.unsqueeze(1) distance = torch.norm(diff, dim=3) collision_constraint = F.relu(0.4 -distance) loss = collision_constraint.mean(dim=[1, 2, 3]).mean() return loss ¦ ¥ Avoid Collision Loss § ¤ def face_to_face_loss(a_joint, b_joint): \"\"\" Calculate a loss to make two people face each other based on shoulders and hips orientation and distance. 
Args: a_joint and b_joint (torch.Tensor): Joint locations of person A, shape [bs, njoints, 3, seqlen]. \"\"\" l_hip, r_hip, l_shoulder, r_shoulder = [1, 2, 16, 17] a_shoulder_vec = a_joint[:, l_shoulder] -a_joint[:, r_shoulder] a_hip_vec = a_joint[:, l_hip] -a_joint[:, r_hip] b_shoulder_vec = b_joint[:, l_shoulder] -b_joint[:, r_shoulder] b_hip_vec = b_joint[:, l_hip] -b_joint[:, r_hip] a_front_vec = a_joint[:, l_shoulder] -a_joint[:, l_hip]", "figure_data": "", "figure_id": "fig_3", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Example of the questionnaire of user-study.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "For [CONTACT TYPE: contact], the distance of two joints should be SMALLER than the DISTANCE; for [CONTACT TYPE: avoid], the distance of two joints should be LARGER than the DISTANCE. IMPORTANT: Consider the transition of contact types, leave time-steps more than 20 frames without any joint-joint pair between different contact types. Use small DISTANCE variance between different contact types: for the joint-joint pairs that are with [CONTACT TYPE: contact], do NOT use DISTANCE larger than 0.5m in the following [CONTACT TYPE: avoid]; for the jointjoint pairs that are with [CONTACT TYPE: contact], use [CONTACT TYPE: avoid] after 20 frames; for the joint-joint pairs that are with [CONTACT TYPE: avoid], use NO joint pairs for 20 frames if the following CONTACT TYPE is contact. Try to not over-use [CONTACT TYPE: avoid]: if there is no explicit semantics of being far away, just do not use joint-joint pair in that frames; if there is explicit semantics of being far away, then use joint-joint pair with [CONTACT TYPE: avoid].", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Step 1 :1{right wrist, right wrist, 0, 10, avoid, 0.3} Step 2: {right wrist, right wrist, 50, 60, contact, 0.05} Step 3: {right wrist, right wrist, 90, 100, avoid, 0.3} [End of Plan 1] [end of an example]", "figure_data": "", "figure_id": "fig_6", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "LLMGenerated Task Plans (only show 5 due to page limit) Instructions: The first fencer lunges at the second, who parries the attack and counters with a successful strike to the head.[Start of Plan1] Text 1: A person lunges towards another with his right foot. Text 2: A person parries the lunged attack while preparing to counter. Step 1: {right_foot, left_knee, 5, 10, contact, 0.3} Step 2: {right_wrist, left_collar, 20, 30, avoid, 0.3} Step 3: {left_elbow, head, 70, 80, contact, 0.05} [End of Plan 1] [Start of Plan 2] Text 1: A person lunges at the other person with his right foot. Text 2: A person blocks the lunged attack. Step 1: {right_foot, left_ankle, 3, 10, contact, 0.2} Step 2: {right_wrist, right_collar, 20, 30, avoid, 0.25} Step 3: {left_wrist, head, 70, 79, contact, 0.02} [End of Plan 2] [Start of Plan 3] Text 1: A person takes a lunge step towards another. Text 2: A person parries the attack and counters. Step 1: {right_foot, right_knee, 7, 14, contact, 0.3} Step 2: {left_wrist, right_collar, 22, 30, avoid, 0.25} Step 3: {right_wrist, head, 69, 77, contact, 0.03} [End of Plan 3] [Start of Plan 4] Text 1: A person lunges northerly towards another with his left foot. Text 2: A person parries the attack and prepares a counterattack. 
Step 1: {left_foot, right_ankle, 6, 10, contact, 0.35} Step 2: {left_wrist, left_collar, 22, 30, avoid, 0.28} Step 3: {right_elbow, head, 71, 80, contact, 0.05} [End of Plan 4] [Start of Plan 5] Text 1: A person lunges at another using his right foot. Text 2: A person deflects the approaching lunge and immediately counters.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Spatial control results on HumanML3D[13]. → means closer to real data is better. Random One/Two/Three reports the average performance over 1/2/3 randomly selected joints in evaluation. † means our evaluation on their model.", "figure_data": "HumanML3DFID ↓ R-precision ↑Diversity →(Top-3)KIT-MLFID ↓ R-precision ↑Diversity →Real0.0020.7979.503(Top-3)JL2P [2]11.020.4867.676Real0.0310.77911.08Text2Gesture [4] 7.664 T2M [13] 1.067 MotionDiffuse [71] 0.630 MLD [6] 0.473 PhysDiff [68] 0.433 T2M-GPT [69] 0.116 MotionGPT [25] 0.2320.345 0.740 0.782 0.772 0.631 0.775 0.7786.409 9.188 9.410 9.724 -9.761 9.528T2M [13] MotionDiffuse [71] 1.954 3.022 MLD [6] 0.404 T2M-GPT [69] 0.514 MotionGPT [25] 0.510 MDM [56] 0.4970.681 0.739 0.734 0.745 0.680 0.39610.72 11.10 10.80 10.92 10.35 10.84MDM [56]0.5440.6119.446PriorMDM † [52]0.8300.39710.54PriorMDM [52]0.5400.6409.160GMD † [27]1.5370.3859.78GMD [27]0.2120.6709.440OmniControl [65] 0.7020.39710.93OmniControl [65] 0.2180.6879.422Our InterControl 0.5800.39710.88Our InterControl 0.1590.6719.482MethodJointFID ↓R-precision ↑ (Top-3)Diversity →Foot skating ratio ↓Traj. err. ↓ (50 cm)Loc. err. ↓ (50 cm)Avg. err.↓ (m)Real data-0.0020.7979.5030.00000.00000.00000.0000MDM [56]No Control 0.5440.6119.4460.09430.89090.60151.1843PriorMDM [52] †0.4980.5869.1670.09240.37260.22100.4552GMD [27] † OmniControl [65]Root0.276 0.2180.655 0.6879.245 9.4220.1108 0.05470.0987 0.03870.0356 0.00960.1457 0.0338Ours0.1590.6719.4820.07290.01320.00040.0496OmniControl [65] Random one Ours0.310 0.1780.693 0.6699.502 9.4980.0608 0.09680.0617 0.04030.0107 0.00310.0404 0.0741OursRandom two 0.1840.6709.4100.09480.04750.00300.0911OursRandom three 0.1990.6739.3520.09300.04870.00260.0969", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Evaluation on (left) spatial errors and (right) user preference in interactions. Spatial Errors Traj. err. (20 cm) ↓ Loc. err. (20 cm) ↓ Avg. err. (m) ↓", "figure_data": "User-study PreferencePriorMDM [52]0.69310.34870.6723PriorMDM [52] 19.6%Ours0.00820.00050.0084Ours80.4%", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation studies on the HumanML3D[13] dataset.", "figure_data": "ItemMethodFID ↓R-precision ↑ (Top-3)Diversity →Foot skating ratio ↓Traj. err. ↓ (50 cm)Loc. err. ↓ (50 cm)Avg. err.↓ (m)(1) Ours (random joint) 0.1780.6699.4980.09680.04030.00310.0741(2)w/o ControlNet0.9650.6219.2160.16240.08790.00590.1013(3)w/ original c0.2270.6569.5440.10040.06970.00420.0785(4)w/o IK guidance 0.1870.6649.5980.07040.85690.45530.6557(5) IK guidance on x0 0.2110.6689.3940.11640.09070.00880.0981(6) w/ 1-st order grad 0.1980.6689.4720.09870.08790.00960.0877(7)sparsity = 0.250.2480.6719.4420.08010.01060.00070.0546(8)sparsity = 0.025 0.2550.6639.5200.07050.00150.00010.0067", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Inference time analysis on a NVIDIA A100 GPU. is updating IK guidance on ControlNet's prediction x 0 (row 5), instead of the posterior mean µ t . 
Its advantage is faster training speed because IK guidance is no longer needed in training ControlNet (similar to classifier guidance", "figure_data": "Sub-Modules MDM + Control Module + Guidance t ∈ [10, 999] + Guidance t ∈ [0, 9]Time (s)39.157.376.580.1", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Single-person model inference with µ t Require: a Motion Diffusion Model M with parameter θ, a Motion ControlNet C with parameter ϕ, original spatial condition c and spatial condition with extra information c f inal , text prompts p, number of L-BFGS K.", "figure_data": "12: end for 13: return x0Σt)Algorithm 2", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Spatial control results on the HumanML3D[13] dataset. Ours (all) means the model is trained on one randomly selected joint among all joints in each iteration.", "figure_data": "MethodJointFID ↓R-precision ↑ (Top-3)Diversity →Foot skating ratio ↓Traj. err. ↓ (50 cm)Loc. err. ↓ (50 cm)Avg. err.↓ (m)Ours (all)Root0.1840.6729.3150.10440.03170.00180.0693Ours (all) Left foot 0.2420.6649.1840.10050.06960.00240.0671Ours (all) Right foot 0.2360.6699.2010.09830.07980.00290.0680Ours (all)Head0.1720.6789.3590.09580.05230.00440.0846Ours (all) Left wrist 0.2600.6608.9650.09150.03750.00120.0874Ours (all) Right wrist 0.2840.6559.0030.09200.03640.00100.0872", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Detailed prompting example of the LLM Planner.", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Example of the LLM generated task plans.", "figure_data": "", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" } ]
Zhenzhi Wang; Jingbo Wang; Yixuan Li; Dahua Lin; Bo Dai
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Cmu graphics lab motion capture database", "year": "" }, { "authors": "C Ahuja; L P Morency", "journal": "IEEE", "ref_id": "b1", "title": "Language2pose: Natural language grounded pose forecasting", "year": "2019" }, { "authors": "T Ao; Q Gao; Y Lou; B Chen; L Liu", "journal": "ACM Trans. Graph", "ref_id": "b2", "title": "Rhythmic gesticulator: Rhythm-aware co-speech gesture synthesis with hierarchical neural embeddings", "year": "2022" }, { "authors": "U Bhattacharya; N Rewkowski; A Banerjee; P Guhan; A Bera; D Manocha", "journal": "", "ref_id": "b3", "title": "Text2gestures: A transformer-based network for generating emotive body gestures for virtual agents", "year": "2021" }, { "authors": "T B Brown; B Mann; N Ryder; M Subbiah; J Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell; S Agarwal; A Herbert-Voss; G Krueger; T Henighan; R Child; A Ramesh; D M Ziegler; J Wu; C Winter; C Hesse; M Chen; E Sigler; M Litwin; S Gray; B Chess; J Clark; C Berner; S Mccandlish; A Radford; I Sutskever; D Amodei", "journal": "NeurIPS", "ref_id": "b4", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "X Chen; B Jiang; W Liu; Z Huang; B Fu; T Chen; G Yu", "journal": "CVPR", "ref_id": "b5", "title": "Executing your commands via motion diffusion in latent space", "year": "2023" }, { "authors": "J Choi; S Kim; Y Jeong; Y Gwon; S Yoon", "journal": "", "ref_id": "b6", "title": "ILVR: conditioning method for denoising diffusion probabilistic models", "year": "2021" }, { "authors": "H Chung; B Sim; D Ryu; J C Ye", "journal": "NeurIPS", "ref_id": "b7", "title": "Improving diffusion models for inverse problems using manifold constraints", "year": "2022" }, { "authors": "P Dhariwal; A Q Nichol", "journal": "NeurIPS", "ref_id": "b8", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Y Duan; T Shi; Z Zou; Y Lin; Z Qian; B Zhang; Y Yuan", "journal": "", "ref_id": "b9", "title": "Single-shot motion completion with transformer", "year": "2021" }, { "authors": "P Esser; J Chiu; P Atighehchian; J Granskog; A Germanidis", "journal": "", "ref_id": "b10", "title": "Structure and content-guided video synthesis with diffusion models", "year": "2023" }, { "authors": "A Ghosh; R Dabral; V Golyanik; C Theobalt; P Slusallek", "journal": "Comput. Graph. 
Forum", "ref_id": "b11", "title": "Imos: Intentdriven full-body motion synthesis for human-object interactions", "year": "2023" }, { "authors": "C Guo; S Zou; X Zuo; S Wang; W Ji; X Li; L Cheng", "journal": "CVPR", "ref_id": "b12", "title": "Generating diverse and natural 3d human motions from text", "year": "2022" }, { "authors": "C Guo; X Zuo; S Wang; L Cheng", "journal": "ECCV", "ref_id": "b13", "title": "Tm2t: Stochastic and tokenized modeling for the reciprocal generation of 3d human motions and texts", "year": "2022" }, { "authors": "C Guo; X Zuo; S Wang; S Zou; Q Sun; A Deng; M Gong; L Cheng", "journal": "ACM MM", "ref_id": "b14", "title": "Action2motion: Conditioned generation of 3d human motions", "year": "2020" }, { "authors": "W Guo; X Bie; X Alameda-Pineda; F Moreno-Noguer", "journal": "CVPR", "ref_id": "b15", "title": "Multi-person extreme motion prediction", "year": "2022" }, { "authors": "Y Guo; C Yang; A Rao; Y Wang; Y Qiao; D Lin; B Dai", "journal": "", "ref_id": "b16", "title": "Animatediff: Animate your personalized text-to-image diffusion models without specific tuning", "year": "2023" }, { "authors": "I Habibie; M Elgharib; K Sarkar; A Abdullah; S Nyatsanga; M Neff; C Theobalt", "journal": "", "ref_id": "b17", "title": "A motion matching-based framework for controllable gesture synthesis from speech", "year": "2022" }, { "authors": "F G Harvey; M Yurick; D Nowrouzezahrai; C Pal", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b18", "title": "Robust motion inbetweening", "year": "2020" }, { "authors": "M Hassan; D Ceylan; R Villegas; J Saito; J Yang; Y Zhou; M J Black", "journal": "", "ref_id": "b19", "title": "Stochastic scene-aware motion prediction", "year": "2021" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "CVPR", "ref_id": "b20", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "J Ho; W Chan; C Saharia; J Whang; R Gao; A Gritsenko; D P Kingma; B Poole; M Norouzi; D J Fleet", "journal": "", "ref_id": "b21", "title": "Imagen video: High definition video generation with diffusion models", "year": "2022" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "NeurIPS", "ref_id": "b22", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "J Ho; T Salimans", "journal": "", "ref_id": "b23", "title": "Classifier-free diffusion guidance", "year": "2022" }, { "authors": "B Jiang; X Chen; W Liu; J Yu; G Yu; T Chen", "journal": "", "ref_id": "b24", "title": "Motiongpt: Human motion as a foreign language", "year": "2023" }, { "authors": "N Jiang; T Liu; Z Cao; J Cui; Y Chen; H Wang; Y Zhu; S Huang", "journal": "", "ref_id": "b25", "title": "Chairs: Towards full-body articulated human-object interaction", "year": "2022" }, { "authors": "K Karunratanakul; K Preechakul; S Suwajanakorn; S Tang", "journal": "CVPR", "ref_id": "b26", "title": "Guided motion diffusion for controllable human motion synthesis", "year": "2023" }, { "authors": "M Kaufmann; E Aksan; J Song; F Pece; R Ziegler; O Hilliges", "journal": "", "ref_id": "b27", "title": "Convolutional autoencoders for human motion infilling", "year": "2020" }, { "authors": "J Kim; J Kim; S Choi", "journal": "AAAI", "ref_id": "b28", "title": "FLAME: free-form language-based motion synthesis & editing", "year": "2023" }, { "authors": "J Kim; Y Seol; T Kwon", "journal": "Comput. Animat. 
Virtual Worlds", "ref_id": "b29", "title": "Interactive multi-character motion retargeting", "year": "2021" }, { "authors": "D P Kingma; M Welling", "journal": "ICLR", "ref_id": "b30", "title": "Auto-encoding variational bayes", "year": "2014" }, { "authors": "Z Kong; W Ping; J Huang; K Zhao; B Catanzaro", "journal": "", "ref_id": "b31", "title": "Diffwave: A versatile diffusion model for audio synthesis", "year": "2020" }, { "authors": "N Kulkarni; D Rempe; K Genova; A Kundu; J Johnson; D Fouhey; L Guibas", "journal": "", "ref_id": "b32", "title": "Nifty: Neural object interaction fields for guided human motion synthesis", "year": "2023" }, { "authors": "B Li; Y Zhao; S Zhelun; L Sheng", "journal": "AAAI", "ref_id": "b33", "title": "Danceformer: Music conditioned 3d dance generation with parametric motion transformer", "year": "2022" }, { "authors": "R Li; S Yang; D A Ross; A Kanazawa", "journal": "", "ref_id": "b34", "title": "Ai choreographer: Music conditioned 3d dance generation with aist++", "year": "2021" }, { "authors": "H Liang; W Zhang; W Li; J Yu; L Xu", "journal": "", "ref_id": "b35", "title": "Intergen: Diffusion-based multi-human motion generation under complex interactions", "year": "2023" }, { "authors": "D C Liu; J Nocedal", "journal": "Math. Program", "ref_id": "b36", "title": "On the limited memory BFGS method for large scale optimization", "year": "1989" }, { "authors": "M Loper; N Mahmood; J Romero; G Pons-Moll; M J Black", "journal": "ACM Trans. Graph", "ref_id": "b37", "title": "SMPL: a skinned multi-person linear model", "year": "2015" }, { "authors": "I Loshchilov; F Hutter", "journal": "ICLR", "ref_id": "b38", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "Z Luo; J Cao; A Winkler; K Kitani; W Xu", "journal": "", "ref_id": "b39", "title": "Perpetual humanoid control for real-time simulated avatars", "year": "2023" }, { "authors": "N Mahmood; N Ghorbani; N F Troje; G Pons-Moll; M J Black", "journal": "", "ref_id": "b40", "title": "Amass: Archive of motion capture as surface shapes", "year": "2019" }, { "authors": "T Von Marcard; R Henschel; M J Black; B Rosenhahn; G Pons-Moll", "journal": "ECCV", "ref_id": "b41", "title": "Recovering accurate 3d human pose in the wild using imus and a moving camera", "year": "2018" }, { "authors": "D Mehta; O Sotnychenko; F Mueller; W Xu; S Sridhar; G Pons-Moll; C Theobalt", "journal": "DV", "ref_id": "b42", "title": "Single-shot multi-person 3d pose estimation from monocular RGB", "year": "2018" }, { "authors": "G Pavlakos; V Choutas; N Ghorbani; T Bolkart; A A A Osman; D Tzionas; M J Black", "journal": "CVPR", "ref_id": "b43", "title": "Expressive body capture: 3d hands, face, and body from a single image", "year": "2019" }, { "authors": "M Petrovich; M J Black; G Varol", "journal": "", "ref_id": "b44", "title": "Action-conditioned 3d human motion synthesis with transformer vae", "year": "2021" }, { "authors": "M Petrovich; M J Black; G Varol", "journal": "ECCV", "ref_id": "b45", "title": "Temos: Generating diverse human motions from textual descriptions", "year": "2022" }, { "authors": "M Plappert; C Mandery; T Asfour", "journal": "Big Data", "ref_id": "b46", "title": "The KIT motion-language dataset", "year": "2016" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark; G Krueger; I Sutskever", "journal": "", "ref_id": "b47", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { 
"authors": "D Rempe; Z Luo; X B Peng; Y Yuan; K Kitani; K Kreis; S Fidler; O Litany", "journal": "CVPR", "ref_id": "b48", "title": "Trace and pace: Controllable pedestrian animation via guided trajectory diffusion", "year": "2023" }, { "authors": "R Rombach; A Blattmann; D Lorenz; P Esser; B Ommer", "journal": "CVPR", "ref_id": "b49", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Y Shafir; G Tevet; R Kapon; A H Bermano", "journal": "", "ref_id": "b50", "title": "Human motion diffusion as a generative prior", "year": "2023" }, { "authors": "J Song; Q Zhang; H Yin; M Mardani; M Liu; J Kautz; Y Chen; A Vahdat", "journal": "", "ref_id": "b51", "title": "Loss-guided diffusion models for plug-and-play controllable generation", "year": "2023" }, { "authors": "Y Song; J Sohl-Dickstein; D P Kingma; A Kumar; S Ermon; B Poole", "journal": "ICLR", "ref_id": "b52", "title": "Scorebased generative modeling through stochastic differential equations", "year": "2021" }, { "authors": "S Starke; H Zhang; T Komura; J Saito", "journal": "ACM Trans. Graph", "ref_id": "b53", "title": "Neural state machine for characterscene interactions", "year": "2019" }, { "authors": "G Tevet; S Raab; B Gordon; Y Shafir; D Cohen-Or; A H Bermano", "journal": "ICLR", "ref_id": "b54", "title": "Human motion diffusion model", "year": "2023" }, { "authors": "J Tseng; R Castellon; C K Liu", "journal": "CVPR", "ref_id": "b55", "title": "EDGE: editable dance generation from music", "year": "2023" }, { "authors": "J Vaillant; K Bouyarmane; A Kheddar", "journal": "IEEE Trans. Vis. Comput. Graph", "ref_id": "b56", "title": "Multi-character physical and behavioral interactions controller", "year": "2017" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; L Kaiser; I Polosukhin", "journal": "NIPS", "ref_id": "b57", "title": "Attention is all you need", "year": "2017" }, { "authors": "J Wang; H Xu; M Narasimhan; X Wang", "journal": "NeurIPS", "ref_id": "b58", "title": "Multi-person 3d motion prediction with multi-range transformers", "year": "2021" }, { "authors": "J Wang; H Xu; J Xu; S Liu; X Wang", "journal": "CVPR", "ref_id": "b59", "title": "Synthesizing long-term 3d human motion and interaction in 3d scenes", "year": "2021" }, { "authors": "J Wang; S Yan; B Dai; D Lin", "journal": "CVPR", "ref_id": "b60", "title": "Scene-aware generative network for human motion synthesis", "year": "2021" }, { "authors": "Z Wang; Y Chen; T Liu; Y Zhu; W Liang; S Huang", "journal": "NeurIPS", "ref_id": "b61", "title": "HUMANISE: languageconditioned human motion generation in 3d scenes", "year": "2022" }, { "authors": "Z Xiao; T Wang; J Wang; J Cao; W Zhang; B Dai; D Lin; J Pang", "journal": "", "ref_id": "b62", "title": "Unified human-scene interaction via prompted chain-of-contacts", "year": "2023" }, { "authors": "Y Xie; V Jampani; L Zhong; D Sun; H Jiang", "journal": "", "ref_id": "b63", "title": "Omnicontrol: Control any joint at any time for human motion generation", "year": "2023" }, { "authors": "S Xu; Z Li; Y X Wang; L Y Gui", "journal": "", "ref_id": "b64", "title": "Interdiff: Generating 3d human-object interactions with physics-informed diffusion", "year": "2023" }, { "authors": "S Xu; Y Wang; L Gui", "journal": "ICLR", "ref_id": "b65", "title": "Stochastic multi-person 3d motion forecasting", "year": "2023" }, { "authors": "Y Yuan; J Song; U Iqbal; A Vahdat; J Kautz", "journal": "ICCV", "ref_id": "b66", "title": "Physdiff: Physics-guided human motion 
diffusion model", "year": "2023" }, { "authors": "J Zhang; Y Zhang; X Cun; Y Zhang; H Zhao; H Lu; X Shen; S Ying", "journal": "CVPR", "ref_id": "b67", "title": "Generating human motion from textual descriptions with discrete representations", "year": "2023" }, { "authors": "L Zhang; A Rao; M Agrawala", "journal": "ICCV", "ref_id": "b68", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "M Zhang; Z Cai; L Pan; F Hong; X Guo; L Yang; Z Liu", "journal": "", "ref_id": "b69", "title": "Motiondiffuse: Text-driven human motion generation with diffusion model", "year": "2022" }, { "authors": "Y Zhang; D Gopinath; Y Ye; J K Hodgins; G Turk; J Won", "journal": "", "ref_id": "b70", "title": "Simulation and retargeting of complex multi-character interactions", "year": "2023" }, { "authors": "K Zhao; Y Zhang; S Wang; T Beeler; S Tang", "journal": "", "ref_id": "b71", "title": "Synthesizing diverse human motions in 3d indoor scenes", "year": "2023" } ]
[ { "formula_coordinates": [ 6, 134.77, 324.02, 169.89, 16.08 ], "formula_id": "formula_0", "formula_text": "q (x t | x t-1 ) = N √ α t x t-1 , (1 -α t ) I" }, { "formula_coordinates": [ 6, 134.77, 378.24, 345.83, 22.58 ], "formula_id": "formula_1", "formula_text": "p θ (x t-1 | x t , p) = N (µ θ (x t , t, p), (1 -α t ) I)" }, { "formula_coordinates": [ 6, 182.71, 435.68, 297.88, 29.02 ], "formula_id": "formula_2", "formula_text": "µ θ (x t , t, p) = √ ᾱt-1 β t 1 -ᾱt x 0 (x t , t, p; θ) + √ α t (1 -ᾱt-1 ) 1 -ᾱt x t ,(1)" }, { "formula_coordinates": [ 6, 134.77, 469.85, 345.83, 27.6 ], "formula_id": "formula_3", "formula_text": "β t = 1 -α t and ᾱt = t s=0 α s . MDM's parameter θ is trained by min- imizing the ℓ 2 -loss ∥x 0 (x t , t, p; θ) -x * 0 ∥ 2 2" }, { "formula_coordinates": [ 8, 373.43, 230.06, 107.17, 11.57 ], "formula_id": "formula_4", "formula_text": "d nj = ∥c nj -R(µ t ) nj ∥ 2 ," }, { "formula_coordinates": [ 8, 134.77, 264.41, 242.54, 12.32 ], "formula_id": "formula_5", "formula_text": "d ′ ∈ R N ×J×3 , loss of one joint is l nj = ReLU d nj -d ′ nj" }, { "formula_coordinates": [ 8, 244.51, 337.68, 236.08, 26.24 ], "formula_id": "formula_6", "formula_text": "L(µ t , c) = n j m nj • l nj n j m nj ,(2)" }, { "formula_coordinates": [ 19, 137.13, 218.88, 181.86, 15.02 ], "formula_id": "formula_7", "formula_text": "x0 ← L-BFGS(L(x0, c)) # IK guidance 9:" }, { "formula_coordinates": [ 19, 134.77, 234.82, 161.65, 15.59 ], "formula_id": "formula_8", "formula_text": "µ t , Σt ← µ (x0, xt) , Σt # Posterior 11: xt-1 ∼ N (µ t ," }, { "formula_coordinates": [ 19, 137.13, 338.59, 157.04, 52.62 ], "formula_id": "formula_9", "formula_text": "# Motion ControlNet 4: {f } ← C xt, t, p, c f inal ; ϕ 5: # Motion Diffusion Model 6: x0 ← M (xt, t, p, {f }; θ) 7: µ t , Σt ← µ (x0, xt) , Σt # Posterior 8:" }, { "formula_coordinates": [ 19, 134.77, 392.14, 183.91, 15.02 ], "formula_id": "formula_10", "formula_text": "µ t ← L-BFGS(L(µ t , c)) # IK guidance 10:" }, { "formula_coordinates": [ 20, 134.77, 232.01, 117.12, 19.95 ], "formula_id": "formula_11", "formula_text": "{f } b ← C x b t , t, p b , c b ; ϕ 11:" }, { "formula_coordinates": [ 20, 134.77, 262.28, 164.21, 20.66 ], "formula_id": "formula_12", "formula_text": "x b 0 ← M x b t , t, p b , {f } b ; θ 14: µ a t , Σt ← µ (x a 0 , x a t ) , Σt # Posterior" }, { "formula_coordinates": [ 20, 162.42, 284.59, 138.13, 9.37 ], "formula_id": "formula_13", "formula_text": "µ b t , Σt ← µ x b 0 , x b t , Σt # Posterior" }, { "formula_coordinates": [ 20, 174.33, 311.82, 104.8, 9.37 ], "formula_id": "formula_14", "formula_text": "µ a t , µ b t ← L-BFGS(L(µ a t , µ b t ))" }, { "formula_coordinates": [ 20, 162.42, 327.76, 64.93, 9.37 ], "formula_id": "formula_15", "formula_text": "x a t-1 ∼ N (µ a t , Σt)" }, { "formula_coordinates": [ 20, 162.42, 338.05, 64.23, 9.37 ], "formula_id": "formula_16", "formula_text": "x b t-1 ∼ N (µ b t , Σt)" } ]
2023-11-27
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b2", "b8", "b24", "b19", "b34", "b34", "b34", "b19", "b19", "b34", "b19", "b34", "b12", "b39", "b12", "b39", "b13", "b24", "b9" ], "table_ref": [], "text": "Advanced image captioning based on large language models (LLMs) [3,8,9,25] has focused on the approach using big-scale models trained on ever-increasingly large-scale datasets, which is no longer viable. This is because the computational cost to train the models increases exponentially and, more importantly, updating training data is almost impossible that keep pace with the growth of novel objects in our daily lives. Sustaining ever-changing object knowledge with a reasonable cost is a pressing concern in LLMs-based models to truly unlock open-world comprehension.\n• GT: A person wearing ice skates on a wood floor. has less trainable parameters than others while achieving comparable results with SOTAs at scale. (Lower) Generated captions by SmallCap, BLIP-2, and our EVCAP for a commonsense-violating image from the WHOOPS dataset. × and ✓ indicate incorrect and correct predictions, respectively. Incorrect objects in captions are highlighted in red , while correct ones are in blue . SmallCap and BLIP-2 give incorrect predictions for \"ice skates\" and \"wood floor\", respectively, while our EVCAP utilizes an external visualname memory to enhance attention to objects within the image, leading to superior performance for image captioning.\nRetrieval-augmented image captioning [20,35] is emerging as an alternative since it considerably reduces training costs in both time and data while producing encouraging results. Nonetheless, with their huge datastore, it is obvious that LLMs would imitate the given texts, limiting their ability to describe open-world objects properly. For instance, SmallCap [35] considers the words \"skateboard\" and \"wooden floor\" to be a pair regardless of visual appearances containing a commonsense-violating pair of \"ice skates\" and \"wood floor\" (Fig. 1, lower). Addition-ally, prompting the LLMs given a lot of retrieved texts becomes cumbersome, requiring more trainable parameters. Fig. 1 (upper) shows that the CIDEr scores obtained by a lightweight SmallCap [35] with 43M trainable parameters are far away from those obtained by a heavy REVEAL [20] with 2.1B trainable parameters. Beyond that, due to the frequent occurrence of new objects, access to their sample texts is not always feasible, making the memory utilized in [20,35] difficult to grow. We thus aim to streamline the external memory used in previous work [20,35] by storing a sufficiently small amount of object information. And, of course, not only does the model not stereotype the example sentences, but the number of trainable parameters would be reduced drastically as a result of the causation (Fig. 1).\nWe follow [13,40] to construct a key-value memory where the key is represented by object's features, and the value corresponds to object's name. Unlike [13,40], which rely on object definition as the key, our method leverages the visual appearance of the object as the key because of the abundance of object images readily available on the internet. We propose an external visual-name memory tailored for ease of expansion and cost-effectiveness in upholding up-to-date object information. 
We present a highly effective retrieval-augmented LLMs-based image captioning method, called EVCAP, that prompts frozen LLMs with object names retrieved from our proposed memory for openworld comprehension. EVCAP contains a frozen image encoder ViT [14] and Q-Former [25] with trainable image query tokens for object retrieval, an attentive fusion module, a trainable linear layer for mapping between vision and language latent spaces, and a frozen LLM decoder [10] for generating captions. Specifically, the attentive fusion module feeds retrieved object names and visual features into a customized frozen Q-Former using trainable object name query tokens to implicitly reduce the presence of superfluous object names. As a result, EVCAP amounts to only 3.97M trainable parameters. Once trained, the model can be adapted to new domains and large-scale data without further fine-tuning or retraining. Our contributions are as follows:\n• We provide an extensible external visual-name memory with minimal but useful object information, which enables LLMs-based models to comprehend the open world. • We present a remarkably lightweight and highly efficacious retrieval-augmented image captioning EVCAP with 3.97M trainable parameters. On in-/out-domain benchmarks and synthetic commonsense-violating dataset, EVCAP trained solely on COCO dataset competes with other lightweight methods by a margin while being on par with other specialist SOTAs." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b3", "b41", "b31", "b11", "b6", "b30", "b32", "b36", "b8", "b24", "b28", "b18", "b38", "b25", "b44", "b14", "b31", "b15", "b34", "b42", "b17" ], "table_ref": [], "text": "Image captioning aims to describe the contents of a given image. It can be roughly divided into two approaches: non-LLMs-based methods and LLMs-based ones. The former approaches [4,22,42] typically employ a visual encoder and a language decoder in an end-to-end fashion to generate captions. However, they are incapable of describing open-world objects. The latter one leverages pre-trained large-scale vision models (CLIP [32], ViT [12]) and LLMs (GPTs [7,31], T5 [33], LLaMA [37]) by bridging the gap between two modalities using either pre-training with largescale data or the learned mapper or prompt techniques. LLMs-based models [8,9,25,29] demonstrate advancements in image captioning challenges, allowing the capacity to describe anything as long as pre-trained vision models can recognize it. Our method belongs to the LLMs-based approaches, but instead of relying fully on the pre-trained vision model, we use object names retrieved from the external memory to augment LLMs-based image captioning. Novel object captioning is a branch of image captioning that describes images containing objects that were not seen during training. Non-LLMs-based methods explore more objects by learning from unpaired image-sentence sources (DCC [19], NOC [39]) or relied on novel object detectors to recognize novel concepts (NBT [28], OSCAR [26] and VinVL [45]). LLMs-based methods such as ViECap [15] leverage the pre-trained CLIP [32] to obtain object entities. Nevertheless, the cut-off in training time of the pre-trained object detector or CLIP prevents it from detecting novel objects that arise quickly in reality. Unlike earlier work, we can readily update our recognition of novel concepts by adding them to external memory, ensuring that we keep any new objects from the past and even the future. 
Retrieval-augmented image captioning is a recently popular approach that augments the captioning model with retrieved information for better open-world understanding. AoANet [16] uses a memory bank of image-sentence pairs and target words. SmallCap [35] employs image-to-text retrieval to obtain sampled captions from a captions datastore. RA-CM3 [44] retrieves documents from an external memory of a mixture of text and image via a dense multimodal retriever. EXTRA [34] and Re-ViLM [43] exploit the similarity of the input image and vision candidates to retrieve captions. Unlike previous methods, our external memory contains visual-name pairs to avoid redundant information in the external captions/documents. In addition, we use an attentive fusion module to mitigate the effects of irrelevant retrieved object names on caption generation. As discussed above, challenge (1) can be resolved by utilizing the visual appearance of objects. However, if we restrict our memory to only a visual-name pair for each object, our memory will be lacking in diversity. Therefore, we gather several images for each target object. Additionally, we keep the synthetic images in our memory to avoid the harm that synthetic images might cause to our method, as pointed out in [18]. With the capability to collect images from the internet, EVCAP can be easily expanded to include novel objects from the real world effortlessly." }, { "figure_ref": [ "fig_1" ], "heading": "Proposed EVCAP", "publication_ref": [ "b10", "b24", "b28", "b34", "b45" ], "table_ref": [], "text": "We base our method on a frozen pre-trained vision model and LLM with several trainable layers (Fig. 2), giving in a model that is cheap to train. To guide the LLM, we adopt a recently popular approach called prompting as in [11,25,29,35,46]. We begin by matching the learned visual features from the input image with image embeddings stored in memory, retrieving object names. We also introduce an attentive fusion module designed to implicitly remove irrelevant retrieved names. Finally, following the attentive fusion, we combine the learned visual features and object name features to form a prompt for the LLM to generate a caption, thus addressing challenge (2)." }, { "figure_ref": [], "heading": "External visual-name memory", "publication_ref": [ "b16", "b35", "b20" ], "table_ref": [], "text": "To build the external visual-name memory, we first collect image-name pairs from the external data source. After that, we encode these images into image embeddings, which serve as keys in memory, and use their names as values.\nExternal data source. We utilize object images from LVIS dataset [17] to construct our external visual-name memory M. Specifically, we use 1203 objects in LVIS, where we randomly select from one to ten images for each object, amounting to 8581 object images. Furthermore, as mentioned in Sec. 3.1, we also incorporate synthetic images in our memory construction. Using stable diffusion [36], we generate five additional images for each object, with a prompt of \"a photo of {object name}\", resulting in a total of M = 14596 (8581 + 5 × 1203) images. Each object image X i is associated with an object name v i . Note that many object images may share the same object name. For the sake of simplicity, we may regard each image as corresponding to a single name. In summary, we have M image-name pairs {(X i , v i )} M i=1 for external memory construction. External memory construction. For each image X i , we use a frozen vision encoder E(•) (see Sec. 
3.3 for detail) to project it into 32 embeddings with the size of 1 × 768 each:\n{k i 1 , k i 2 , • • • , k i 32 } = E(X i ).\nWe then average 32 embeddings to produce a single embedding k i (1 × 768) that serves as the key (visual) in M. The paired object name v i acts as its value (name). Consequently, we have the visual-name memory M = {(k i , v i )} M i=1 which is indexed using FAISS [21], facilitating rapid searches based on similarity measures. Our memory can be expanded effortlessly by gathering additional visual-name pairs (see Sec. 5.3)." }, { "figure_ref": [], "heading": "Object names retrieval", "publication_ref": [ "b24", "b13", "b31" ], "table_ref": [], "text": "Image encoding. We feed a frozen vision encoder E image X and image query tokens T img to produce visual features Q. To enable the retrieval process controllable, we make image query tokens to be trainable. Thus, the image encoding process can be summarized as Q = E(X, T img ). We use the BLIP-2 pre-trained vision encoder [25], which consists of a pre-trained vision transformer ViT-g [14] outputting image features (257 × 1408), and a Q-Former receiving image features producing |Q| = 32 learned visual features (1 × 768 each). We denote Q = {q 1 , q 2 , ..., q 32 }. Retrieval. Having obtained Q, we calculate the cosine similarity between the query q j ∈ Q and the key k i ∈ M. The similarity calculation is given by SIM(q j , k i ) = 32]. Given each q j , we select one key with the highest similarity score, resulting in 32 key-value candidates {k best j , v best j } 32 j=1 . After that, we filter out candidates with repeated object names (values), and then select the top-K values. In particular, we determine the index j from the key that has the highest SIM score. These selected values v best j are redefined as the new notation v l in the retrieved top-K object names for the input image, which can be summarized as follows:\nq ⊤ j k i ∥qj ∥∥k i ∥ , where i ∈ [1, M ], j ∈ [1,\n{k best j , v best j } = arg max k i SIM q j , k i , j = arg max j SIM(q j , k best j ), v l ← v best j ,\nwhere l ∈ [1, K]. As a result, the retrieved top-K object names are {v l } K l=1 ." }, { "figure_ref": [], "heading": "Attentive fusion", "publication_ref": [ "b24" ], "table_ref": [], "text": "Since the object names obtained from the retrieval process may be redundant, we develop an attentive fusion module to selectively distill object name features. The retrieved object names {v l } K l=1 are concatenated together into a sequence S, each separated by a delimiter:\nS = {v 1 , [SEP], v 2 , [SEP], • • • , [SEP], v K }.\nThe sequence S and visual features Q are fed into a customized Q-Former F(•), which is constructed from the frozen pretrained Q-Former as we used in vision encoder E. Nonetheless, in order to enable object names to get attention from visual features, we switch the image embedding port and the text instruction port (see [25] for architecture detail). Like in the image encoding process in Sec. 3.3, we make the object name query tokens T obj learnable during training to assist in learning object name features related to the caption. The size of T obj is P × 768, where P indicates the number of object name query tokens. We get the object name features V = F(S, Q, T obj )." 
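The memory construction and retrieval steps described above can be sketched with FAISS as follows. This is a minimal illustration, not the authors' code: the helper `encode_image` (standing in for the frozen ViT-g + Q-Former encoder E(·)), the flat inner-product index type, and all function names are assumptions. Cosine similarity is realized as inner product over L2-normalized vectors, and `add_objects` shows how the memory could be expanded with new (e.g., web-collected or synthetic) visual-name pairs.

```python
import numpy as np
import faiss

EMB_DIM = 768  # each Q-Former output embedding has size 768


def build_memory(object_images, object_names, encode_image):
    """Build the external visual-name memory M = {(k_i, v_i)}.

    encode_image(img) is assumed to return the 32 Q-Former output embeddings
    of shape (32, 768); the key k_i is their average, the value v_i its name.
    """
    keys = []
    for img in object_images:
        emb = encode_image(img)            # (32, 768)
        keys.append(emb.mean(axis=0))      # average -> single key k_i of shape (768,)
    keys = np.stack(keys).astype("float32")

    faiss.normalize_L2(keys)               # cosine similarity == inner product on unit vectors
    index = faiss.IndexFlatIP(EMB_DIM)
    index.add(keys)
    return index, list(object_names)


def add_objects(index, names, new_images, new_names, encode_image):
    """Expand the memory with additional visual-name pairs without retraining."""
    new_keys = np.stack([encode_image(img).mean(axis=0) for img in new_images]).astype("float32")
    faiss.normalize_L2(new_keys)
    index.add(new_keys)
    names.extend(new_names)


def retrieve_names(index, names, visual_features, top_k=10):
    """Retrieve the top-K object names for the 32 learned visual features Q of one image."""
    q = np.asarray(visual_features, dtype="float32")   # (32, 768)
    faiss.normalize_L2(q)
    sims, idxs = index.search(q, 1)                    # best key for each query q_j
    # Deduplicate by object name, keeping the highest similarity per name.
    best = {}
    for sim, idx in zip(sims[:, 0], idxs[:, 0]):
        name = names[idx]
        if name not in best or sim > best[name]:
            best[name] = sim
    ranked = sorted(best, key=best.get, reverse=True)
    return ranked[:top_k]
```

In this sketch, memory expansion is a single call to `add_objects`, which mirrors the paper's claim that new objects can be incorporated without fine-tuning the model.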
}, { "figure_ref": [], "heading": "Caption generation", "publication_ref": [ "b9", "b36", "b45" ], "table_ref": [], "text": "Before inputting the visual features Q and object name features V into the LLM decoder, we concatenate (⊕) them and use a linear layer ϕ(•) to project them into the input latent space of the LLM as ϕ(Q ⊕ V). The LLM used for caption generation in this work is the pre-trained Vicuna-13B [10], an open-source chatbot constructed from LLaMA [37]. During training and evaluation, we design a prompt in a conversational format, that is similar to [46]: ###Human: <Img><ProjFeature></Img> Describe this image in detail. ###Assistant: in which, ProjFeature denotes the projected feature ϕ(Q ⊕ V) after the linear layer. In training phase, given input caption tokens {c i } L i=1 , the LLM decoder concatenates the embedded prompt {w i } N i=1 and the embedded caption tokens {c i } L i=1 as input, and predicts the caption tokens in an autoregressive fashion, while in the evaluation phase, we only need to input the embedded prompt. We train EV-CAP by minimizing the cross-entropy loss in an end-to-end way:\nL θ = - L i=1 log p θ (c i | w 1 , ...w N , c 1 , ..., c i-1\n), in which θ indicates the trainable parameters." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Training setup", "publication_ref": [ "b24", "b13", "b22", "b9", "b26" ], "table_ref": [], "text": "Implementation. EVCAP uses the same image encoder as in BLIP-2 [25], consisting of a ViT-g [14] and their pretrained Q-Former. Since we intend to obtain object name features through cross-attention between retrieved object names and visual features, we develop a customized Q-Former, which consists of BERT [23] with cross-attention layers inserted at every other transformer block. We use a frozen Vicuna-13B [10] as the caption generator. Training dataset. For all experiments, we exclusively train EVCAP using the training set of COCO dataset [27], consisting of 82k images and 5 captions per images. The entire training process takes about 3 hours on 4 A6000 GPUs, using mixed precisions (more details in the supplementary)." }, { "figure_ref": [], "heading": "Evaluation setup", "publication_ref": [ "b14", "b40", "b1", "b29", "b8", "b44", "b15", "b39", "b12", "b14", "b10", "b25", "b24", "b19", "b45", "b34", "b28", "b4", "b40", "b8" ], "table_ref": [], "text": "Evaluation dataset. We evaluate EVCAP, trained using the COCO training set, across four datasets: its test set, two challenging benchmarks -NoCaps validation set and Flickr30k test set, and a synthetic commonsense-violating dataset -WHOOPS. We adhere follow prior work [15,41] to use the same images of Karpathy split [22] on COCO test set, NoCaps [2] validation set, and Karpathy split on Flickr30k [30] test set. In addition, WHOOPS [6] is a synthetic image captioning dataset comprising 500 synthetic commonsense-violating images and 2500 paired captions. Compared methods. We compare EVCAP with several SOTAs. 
According to the trainable parameters size, they can be divided into 1) Heavyweight-training (between [9] 1.6B\n17B - - 149.1 - - - - - - - 127.0 - - - PaLI-X UL2-32B [8] 2.2B 55B - - 149.2 - - - - - - - 126.3 - - -\n100M to 5B): VinVL [45], AoANet [16], NOC-REK [40], RCA-NOC [13], ViECap [15], InstructBLIP [11], OS-CAR [26], BLIP [24], BLIP-2 [25], REVEAL [20]; 2) Lightweight-training (less than 100M): MiniGPT4 [46], SmallCap [35], ClipCap [29]; and also 3) Specialist SO-TAs with huge trainable parameters (larger than 5B): Qwen-VL [5], CogVLM [41], PaLI [9], PaLI-X [8]. Among these methods, AoANet, NOC-REK, RCA-NOC, REVEAL, and SmallCap are retrieval-augmented captioning methods." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Results on in-/out-domain benchmarks", "publication_ref": [], "table_ref": [], "text": "We assess EVCAP against SOTAs on both in-domain and out-domain benchmarks. Qualitative results. Fig. 3 presents a comparison of captions generated by our EVCAP and three SOTA models across three benchmarks. The captions of SmallCap are generated by its publicly accessible demo [1]. We generate captions of MiniGPT4 and BLIP-2 using their respective pre-trained models. As a lightweight and retrievalaugmented captioning method, SmallCap struggles to produce accurate captions for given images, primarily because it relies on retrieved captions laden with extraneous information. MiniGPT4, though aligned with the primary content of images, sometimes misses certain objects like \"trees\" and \"headphones\". This oversight stems from its " }, { "figure_ref": [], "heading": "BLIP-2:", "publication_ref": [], "table_ref": [], "text": "A laptop computer with a picture of two men on it.\nEVCap: A laptop computer with a picture of two men on the screen." }, { "figure_ref": [], "heading": "NoCaps Val", "publication_ref": [], "table_ref": [], "text": "GT: Two men are riding on a wooden vehicle pulled by two donkeys. SmallCap: A donkey pulling a cart with a man in the background. MiniGPT4: Two men riding on a donkey in the dirt. BLIP-2: Two men riding a horse drawn cart through a field.\nEVCap: Two men riding in a cart pulled by two donkeys.\nGT: A very young child in a denim baseball cap eats a green apple. SmallCap: A young boy holding an apple in his hand. MiniGPT4: A baby sitting in a high chair eating an apple. BLIP-2: A baby sitting in a white chair eating a green apple. EVCap: A toddler eating a green apple while wearing a hat." }, { "figure_ref": [], "heading": "Flickr30k Test COCO Test", "publication_ref": [], "table_ref": [], "text": "GT: A green bus driving through a rural area with trees in the background. SmallCap: A bus driving down a street next to trees. MiniGPT4: A green bus is driving down the street. BLIP-2: A green bus driving down a road with trees in the background. EVCap: A green bus driving down a road next to trees.\nGT: A woman in a blue top with headphones and two cellphones. SmallCap: A woman sitting in front of a laptop computer. MiniGPT4: A woman sitting on a couch holding two phones. BLIP-2: A woman sitting on a couch with two cell phones. EVCap: A woman wearing headphones holding two cell phones.\nFigure 3. Examples of captions generated by our EVCAP and three SOTA methods on COCO test set, NoCaps validation set, and Flickr30k test set. GT refers to the Ground Truth captions. Incorrect objects in captions are highlighted in red , while correct ones are in blue . 
Our EVCAP correctly generates captions across different datasets, showing performance comparable to BLIP-2. In contrast, SmallCap and MiniGPT4 sometimes either miss objects or include incorrect ones in their generated captions. focus on the main objects in images, without integrating additional cues for other objects provided by the retrieved object names. In contrast, the captions generated by our EVCAP are comparable to those of BLIP-2." }, { "figure_ref": [ "fig_2" ], "heading": "Results on commonsense-violating data", "publication_ref": [ "b35" ], "table_ref": [], "text": "To explore our EVCAP's capability in describing contents in open-word settings, we further evaluate it on WHOOPS dataset, which contains commonsense-violating images. Quantitative results. In Tab. 2, we compare the performance of EVCAP, MiniGPT4, BLIP, and BLIP-2 on WHOOPS dataset. This dataset is particularly challenging due to its inclusion of unusual objects [6]. Initially, as an end-to-end trained model, our EVCAP exhibits performance similar to MiniGPT4. However, there is a noticeable improvement in the CIDEr score, after the external mem- Table 3. Ablation study on components prior to the LLM decoder in EVCAP. The result of \"+ Attentive fusion\" demonstrates the substantial impact of the external visual-name memory. ory is enriched with 2396 new objects from the WHOOPS dataset, each represented by 5 synthesized images generated using stable diffusion [36]. It highlights the effectiveness of our idea of incorporating an expandable external memory into the captioning model for open-world comprehension. Qualitative results. Fig. 4 illustrates the captions generated by EVCAP, EVCAP (w/WHOOPS), and three SOTAs for two images from the WHOOPS dataset. The first image challenges common sense as it unusually pairs \"Einstein\" with \"racing car\". While all SOTAs simply refer to \"Einstein\" as \"a man\", our EVCAP and EVCAP (w/WHOOPS) correctly identify him. The second image shows another unusual example. Similar to other methods except for BLIP-2, EVCAP can not recognize \"blue cartoon character\" as \"Pikachu\", while EVCAP (w/WHOOPS) successfully predicts it because of the updated memory. In these two images, SmallCap and MiniGPT4 tend to generate captions with hallucinatory objects, a result of commonsenseviolating contents present in the images." }, { "figure_ref": [ "fig_3", "fig_2" ], "heading": "Detailed analysis", "publication_ref": [ "b37", "b34" ], "table_ref": [], "text": "Ablation study. We assess the contribution of each component prior to the LLM decoder in EVCAP by incrementally 5, where captions from Baseline and Baseline+ inaccurately include objects like \"couch\" and \"bed\", and Baseline+ overlooks \"hand\". Exploration for external memory expandability. To demonstrate the scalability of the external memory in EV-CAP, we visualize the visual features stored in LVIS external memory, and newly synthesized data from objects appearing in the WHOOPS dataset. We employ t-SNE [38] to plot visual features after reducing their dimensions to 2-D (Fig. 6). For clear visualization, we only randomly display 3649 visual features in LVIS memory, and add 479 visual features from WHOOPS objects. Among them, 35 samples are randomly labeled. The result shows a clear clustering of LVIS objects (blue) in the external memory, as well as the successful integration and appropriate localization of new objects from WHOOPS (red) into these clusters. 
This pattern not only confirms the distinctiveness of visual features already present in the memory but also demonstrates the potential to accurately incorporate and differentiate new objects introduced from updated data. These findings highlight our external memory's ability to expand and maintain its effectiveness even as new data is incorporated. Impact of external memory size. We examine the impact of external memory size in Tab. 4. On the one hand, we randomly remove 30%, 60%, and 90% data in the external memory constructed from LVIS objects. The results show the performance gradually degrades on NoCaps as reducing 30% and 90% LVIS. Despite some unexpected increases in certain results on NoCaps (5th row) and Flickr30k (4th -5th rows), they do not alter the overall downward trend. Similar phenomena are also noted in SmallCap [35], we speculate it is due to data distribution. On the other hand, as we infuse WHOOPS knowledge into LVIS memory, there is a slight improvement on NoCaps (out) and Flickr30k. These observations validate the model's capability to effectively retrieve object names from an updated memory, enhancing its performance in generating captions. Impact of the number of retrieved object names. We investigate how the number of retrieved object names K (Sec. 3.3) affect EVCAP in Fig. 7. We train the model with K from 1 to 20 and evaluate the performance under CIDEr on all three benchmarks. From the results, we can find that the model works worst on the out-domain dataset (NoCaps) when only one retrieved object name is used. As we gradually add more object names, performance fluctuates but improves. This pattern aligns with our intuition that when using one object name, the model will make errors or miss some objects in generated captions due to the incorrect object name. When we add more object names, we increase the error tolerance, and the attentive module in EVCAP automatically pays more attention to image-related object names, thus improving results. Furthermore, we observe that setting K to 10 yields relatively optimal overall performance, validating the choice of K = 10 in EVCAP. Analysis with different decoders. To explore the influence of different LLMs decoders on our EVCAP, we experiment by substituting Vicuna-13B with GPT2 and Vicuna-7B, as detailed in Tab. 5. With GPT2 as the decoder, EVCAP still markedly surpasses other GPT2-based models, achieving impressive gains of 11.3 and 10.0 under CIDEr on COCO and Flickr30k, compared to SmallCap. When employing Vicuna-7B, the comparison of performance trends mirrors those observed with Vicuna-13B, further attesting to the robustness and adaptability of EVCAP across different LLM decoders. Notably, both SmallCap, which retrieves captions, and our GPT2-based EVCAP, which retrieves object names, use the same GPT2 decoder. Therefore, their comparison also underscores the effectiveness of our method's object name retrieval and attentive fusion strategy.\nLimitations. First, EVCAP cannot retrieve all objects that appear in the given image, leading to incomplete image descriptions as the second example in Fig. 4. We will investigate integrating object detection with image captioning to enhance completeness. Second, our focus on object representation restricts consideration of other crucial captioning elements, affecting overall performance. Similar to all models trained on the COCO dataset, our EVCAP inherits its captioning style, which limits its ability to generate varied styles. 
This limitation is reflected in our relatively modest performance improvements in Tab. 2, compared to MiniGPT4. We will overcome this limitation by exploring methodologies that encourage style diversity in the future." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We further advance image captioning in real-world scenarios by introducing EVCAP, a novel image captioning model with object names retrieved from an external visual-name memory. The external memory is easily expandable, allowing for effortless updates with new object visuals and names. EVCAP stands out for its efficiency, comprising merely 3.97M trainable parameters, yet delivering robust performance. We extensively compare EVCAP with SOTAs on various benchmarks and commonsense-violating data, demonstrating its significant superiority in performance." }, { "figure_ref": [], "heading": "Supplementary Material for EVCAP: Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension", "publication_ref": [], "table_ref": [], "text": "This supplementary material complements our paper with the following sections: First, we delve into the implementation specifics of our EVCAP, which were not covered in the main paper (see Sec. A). Second, we offer an expanded discussion on the external visual-name memory, as utilized in the main paper (see Sec. B). Finally, we present additional results to evaluate the effectiveness of EVCAP (see Sec. C)." }, { "figure_ref": [], "heading": "A. Implementation Details", "publication_ref": [], "table_ref": [], "text": "Our method is based on Pytorch and is trained within one epoch with a batch size of 24 using mixed precisions. We optimize the model using AdamW, setting the weight decay at 0.05, and using β 1 and β 2 values of 0.9 and 0.99, respectively. A cosine learning rate (LR) decay strategy is adopted, starting with an initial LR of 1e-4. The model undergoes 5000 linear warm-up steps, beginning with a start LR of 1e-6. During the evaluation phase, we use a beam search strategy with a beam size of 5 to generate captions." }, { "figure_ref": [], "heading": "B. External Visual-Name Memory", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.1. LVIS memory", "publication_ref": [], "table_ref": [], "text": "As stated in Sec. 3.2 of the main paper, we utilize 1203 objects from the LVIS dataset. For each of these objects, we randomly select between one and ten images from LVIS. Additionally, we enrich our data by incorporating five synthetic images for each object, created using stable diffusion. We show two samples of this external visual-name memory, constructed using objects from LVIS in Fig. A." }, { "figure_ref": [], "heading": "B.2. WHOOPS memory", "publication_ref": [ "b39", "b12", "b14", "b10", "b25", "b24", "b19" ], "table_ref": [], "text": "To illustrate the scalability of the external memory in EV-CAP, we expand it by integrating WHOOPS knowledge into the original external visual-name memory in Sec. 5.2 and Sec. 5.3 of the main paper. Specifically, we focus on objects that are mentioned in the answers of VQA annotations in the WHOOPS dataset because of their conciseness and emphasis on key objects. For each of these objects, we produce five synthetic images employing stable diffusion. Two examples from this augmented memory, featuring newly added object images and their corresponding names, are presented in Fig. B. input images, such as \"tie\" and \"mouse\". 
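The optimization settings reported in Sec. A can be reproduced with a standard PyTorch optimizer and a LambdaLR schedule, as sketched below. Only the start LR (1e-4), warm-up start LR (1e-6), 5000 warm-up steps, weight decay (0.05), betas (0.9, 0.99), and the cosine decay type are stated in the text; the exact warm-up interpolation and the cosine floor of zero are assumptions.

```python
import math
import torch


def build_optimizer_and_scheduler(trainable_params, total_steps,
                                  warmup_steps=5000, base_lr=1e-4, warmup_start_lr=1e-6):
    """AdamW with weight decay 0.05 and betas (0.9, 0.99), linear warm-up
    from 1e-6 to 1e-4, then cosine decay over the remaining steps."""
    optimizer = torch.optim.AdamW(trainable_params, lr=base_lr,
                                  weight_decay=0.05, betas=(0.9, 0.99))

    def lr_lambda(step):
        if step < warmup_steps:
            # Linear warm-up from warmup_start_lr up to base_lr.
            lo = warmup_start_lr / base_lr
            return lo + (1.0 - lo) * step / warmup_steps
        # Cosine decay from base_lr towards 0 (assumed floor).
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```

At evaluation time, captions are generated with beam search (beam size 5), which only changes decoding and leaves this training setup untouched.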
The same hallucinatory object \"mouse\" is also found in the retrieved captions, indicating that SmallCap's diminished performance is largely due to its reliance on retrieved captions containing irrelevant information. In comparison, our EVCAP demonstrates a performance on par with BLIP-2. --NOC-REK* [40] 8d 2 RTX3090 RCA-NOC* [13] 1d 8 A100 ViECap GPT2 [15] --InstructBLIP Vicuna-13B [11] 1.5d 16 A100 OSCAR [26] 74h 1 V100 BLIP [24] -2 16-GPU nodes BLIP-2 FlanT5-XL [25] ∼9d 16 A100 REVEAL* T5 [20] 5d 256 CloudTPUv4 chips" }, { "figure_ref": [], "heading": "Lightweight-training models", "publication_ref": [ "b45", "b34", "b28" ], "table_ref": [], "text": "MiniGPT4 Vicuna-13B [46] 10h 4 A100 SmallCap* GPT2 [35] 8h 1 A100 ClipCap GPT2 [29] 6h 1 GTX1080 EVCAP* Vicuna-13B 3h 4 A6000" }, { "figure_ref": [], "heading": "Specialist SOTAs", "publication_ref": [ "b4", "b40", "b8" ], "table_ref": [], "text": "Qwen-VL Qwen-7B [5] --CogVLM Vicuna-7B [41] 1d 4096 A100 PaLI mT5-XXL [9] --PaLI-X UL2-32B [8] -with various SOTA models. Due to the diversity of GPUs employed across different models, drawing a direct comparison is challenging. Nevertheless, it's evident that the training time for our EVCAP is comparatively shorter than most models. Number of object name query tokens. We explore the impact of varying the number of retrieved object names P (Sec. A pair of men standing on snowboards giving the thumbs up.\nA snowboarder in a green jacket and a skier." }, { "figure_ref": [], "heading": "SmallCap' retrieved captions:", "publication_ref": [], "table_ref": [], "text": "A large man speaking in front of a crowd. A man at a podium speaking at an event.\nA person speaking something with a mike in his hands.\nA man is speaking while a picture is taken of him." }, { "figure_ref": [], "heading": "SmallCap' retrieved captions: An orange and white electric gadget sits on a red surface. A white joystick held by a child for a video game.", "publication_ref": [], "table_ref": [], "text": "A video game control is held by grips. An electronic gadget connected to a keyboard and wireless mouse. " }, { "figure_ref": [], "heading": "SmallCap' retrieved captions:", "publication_ref": [], "table_ref": [], "text": "A small slice of gourmet flat bread pizza." }, { "figure_ref": [], "heading": "Flat bread pizza slices piled on top of each other. A little personal sized pizza cut into squares.", "publication_ref": [], "table_ref": [], "text": "A collection of differently topped pizza slices on a plate." }, { "figure_ref": [], "heading": "SmallCap' retrieved captions: Two cellphones have cute homemade cellphone covers. Several crocheted items with yarn scissors and crochet hooks. A series of crafting tools are laid out mostly for sewing.", "publication_ref": [], "table_ref": [], "text": "A number of craft items sitting on a table. A person looks into an aquarium at a large animal swimming. This is a large fish that is in an aquarium. Some animals that are in the water together." }, { "figure_ref": [], "heading": "SmallCap' retrieved captions:", "publication_ref": [], "table_ref": [], "text": "An image of a bathroom setting with double faucet. A very nice looking clean cut and contemporary styled wash basin.\nA modern bathroom with a toilet and pedestal sink.\nA washroom area with a sink soap and glasses." }, { "figure_ref": [], "heading": "SmallCap' retrieved captions:", "publication_ref": [], "table_ref": [], "text": "An apple computer ipod and other electronic devices.\nA variety of apple ipod products on display. 
Multiple ipods on a desk surrounded by computers. Man at computer holding ipod and interfacing the two devices. A black and brown dog with its tongue out." }, { "figure_ref": [], "heading": "SmallCap' retrieved captions:", "publication_ref": [], "table_ref": [], "text": "A man fishing while standing on some rocks next to the ocean. A man fishes from some rocks with a cargo ship in the distance.\nA man with a rainbow umbrella fishing off a rock coast. Fisherman with a pole holding needle nose pliers." }, { "figure_ref": [], "heading": "SmallCap' retrieved captions:", "publication_ref": [], "table_ref": [], "text": "A runner is touching base while another player is waiting to catch the ball.\nA runner sliding onto base while another player catches the ball. A female baseball player unleashes a hit and goes for the run. A girls softball game with a play at home base as an umpire watches to make the call. " } ]
Image captioning based on large language models (LLMs) can describe objects not explicitly observed in the training data; yet novel objects occur frequently, making it necessary to sustain up-to-date object knowledge for open-world comprehension. Instead of relying on large amounts of data and scaling up network parameters, we introduce a highly effective retrieval-augmented image captioning method that prompts LLMs with object names retrieved from an External Visual-name memory (EVCAP). We build an ever-changing object knowledge memory from objects' visuals and names, enabling us to (i) update the memory at minimal cost and (ii) effortlessly augment LLMs with retrieved object names using a lightweight and fast-to-train model. Our model, trained only on the COCO dataset, can be adapted to out-domain data without additional fine-tuning or retraining. Comprehensive experiments on various benchmarks and synthetic commonsense-violating data demonstrate that EVCAP, comprising only 3.97M trainable parameters, outperforms other methods of comparable model size. Notably, it achieves competitive performance against specialist SOTAs that have an enormous number of parameters.
EVCAP: Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension
[ { "figure_caption": "•Figure 1 .1Figure 1. Overall comparison of our EVCAP and SOTAs. (Upper) Comparison of the number of trainable parameters, CIDEr score on COCO and NoCaps datasets. The size of each circle reflects the log number of trainable parameters. EVCAP (3.97M)has less trainable parameters than others while achieving comparable results with SOTAs at scale. (Lower) Generated captions by SmallCap, BLIP-2, and our EVCAP for a commonsense-violating image from the WHOOPS dataset. × and ✓ indicate incorrect and correct predictions, respectively. Incorrect objects in captions are highlighted in red , while correct ones are in blue . SmallCap and BLIP-2 give incorrect predictions for \"ice skates\" and \"wood floor\", respectively, while our EVCAP utilizes an external visualname memory to enhance attention to objects within the image, leading to superior performance for image captioning.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure2. Schematic of our proposed EVCAP. It consists of an external visual-name memory with image embeddings and object names (upper), a frozen ViT and Q-Former equipped with trainable image query tokens, an attentive fusion module developed by a customized frozen Q-Former and trainable object name query tokens, and a frozen LLM with a trainable linear layer (lower). The ViT and Q-Former extract learned visual features from the input image, which are then used to retrieve object names from the external memory. These retrieved object names and learned visual features undergo cross-attention in the customized Q-Former, creating refined object name features. Finally, the object name features combined with visual features are fed into the LLM post a linear layer for generating captions.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Examples of captions generated by our EVCAP, EVCAP (w/ WHOOPS), and three SOTAs on WHOOPS dataset. Incorrect objects are highlighted in red , while correct ones are in blue .", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Visualization of the captions generated from ablation study on the NoCaps validation set. We also show the retrieved object names by EVCAP, presented in gray. Incorrect objects in captions are highlighted in red , while correct ones are in blue .", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "42 Figure 6 .426Figure 6. Visualization of the visual features in external memory using t-SNE. For visual features in LVIS dataset's objects (blue), the related objects fall in the same cluster. After adding more visual features of synthesized images from WHOOPS' objects, new objects (red) are located at appropriate clusters (zoom-in view).", "figure_data": "", "figure_id": "fig_4", "figure_label": "426", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "\"A moose isstanding in…Customized Q-Formerthe water with palmtrees in thebackground.\"…Image query tokens3.1. Idea of EVCAPWe aim to build a retrieval-augmented LLMs-based imagecaptioning model with a sufficiently small yet informativeexternal memory. 
It involves two challenges: (1) construct-ing an expandable external memory, and (2) building an ef-fective LLMs-based model using retrieved object names.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Helmet: a protective…headgear made of hard material to resistblows…Helmet: a protective headgear made of hard material to resist…blowsValue: Object namesPikachu Moose… Helmet", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative comparison against SOTA methods on three common image captioning benchmarks. * denotes using a memory bank. We report the size of training data and parameters; BLEU@4 (B@4), METEOR (M), CIDEr (C), and SPICE (S) scores on COCO test set; C and S scores on in-domain, near-domain, out-domain and overall data of NoCaps validation set; C and S scores on Flickr30k test set. Higher score is better. Bold indicates the best results among compared methods, normal indicates the second best results.", "figure_data": "TrainingCOCONoCaps valFlickr30kMethodDataPara.TestIn-domainNear-domainOut-domainOverallTestB@4MCSCSCSCSCSCSHeavyweight-training modelsVinVL [45]8.9M110M38.230.3129.323.696.813.590.713.187.411.690.912.8--AoANet+MA* [16]COCO-38.028.7121.021.8----------NOC-REK* [40]COCO110M----104.714.8100.214.1100.713.0100.914.0--RCA-NOC* [13]COCO110M37.429.6128.423.192.212.987.812.687.511.588.312.4--ViECap GPT2 [15]COCO124M27.224.892.918.261.110.464.39.965.08.666.29.547.913.6InstructBLIP Vicuna-13B [11] 129M188M----------121.9-82.8-OSCAR [26]4.1M338M37.430.7127.823.583.412.081.612.077.610.681.111.7--BLIP [24]129M446M40.4-136.7-114.915.2112.114.9115.314.4113.214.8--BLIP-2 FlanT5-XL [25]129M1.2B42.4-144.5-123.716.3120.215.9124.815.1121.615.8--REVEAL* T5 [20]1.3B2.1B--145.4-------123.0---Lightweight-training modelsMiniGPT4 Vicuna-13B [46]5M3.94M38.029.6129.623.499.014.8106.915.3110.814.9108.815.178.416.9SmallCap* GPT2 [35]COCO7M37.027.9119.721.3--------60.6-ClipCap GPT2 [29]COCO43M33.527.5113.121.184.912.166.810.949.19.665.810.9--EVCAP* Vicuna-13BCOCO3.97M41.531.2140.124.7111.715.3119.515.6116.514.7119.315.384.418.0Specialist SOTAsQwen-VL Qwen-7B [5]1.4B9.6B----------121.4-85.8-CogVLM Vicuna-7B [41]1.5B6.5B--148.7-----132.6-128.3-94.9-PaLI mT5-XXL", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative results on commonsense-violating data -WHOOPS dataset. EVCAP (w/ WHOOPS) denotes EVCAP using the memory expanded by WHOOPS objects. The results reveal the open-world comprehension ability and expandability of EVCAP.", "figure_data": "MethodB@4MCSOnly pre-trained modelsBLIP [24] (from [6])13-65-BLIP-2 FlanT5-XXL [25] (from [6])31-120-BLIP-2 FlanT5-XXL [25] (reproduced)2826.7 93.1 17.9Finetuned models on COCOMiniGPT4 [46]24.2 26.7 84.8 18.2BLIP [24]22.9 25.0 79.3 17.1BLIP-2 FlanT5-XL [25]25.8 27.0 89.1 18.3End-to-end trained models on COCOEVCAP24.1 26.1 85.3 17.7EVCAP (w/ WHOOPS)24.4 26.1 86.3 17.8", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Impact of the external memory size on the performance of EVCAP by evaluation under CIDEr scores. Changes in the size of external memory result in changes in performance. CIDEr scores after training EVCAP with the number of retrieved object names K from 1 to 20. 
The results indicate that the performance is relatively optimal when K is set to be 10.", "figure_data": "MethodNoCaps valFlickr30kInNearOutOverallTestLVIS objects (EVCAP) 111.7 119.5 116.5119.384.4-30% LVIS112.0 119.2 115.3118.885.0-60% LVIS111.4 119.1 116.2119.085.1-90% LVIS110.6 118.2 115.8118.383.6+ WHOOPS110.7 118.9 116.7119.084.9COCONoCapsFlickr30k140140.4 139.5139.3140.1 140.1139.6140.3120116.4 117.9118.7119.3 116.2116.8118.710084.7 85.284.984.485.184.784.88002468101214161820Number of retrieved object names (K)Figure 7.", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Analysis with different LLM decoders including GPT2, Vicuna-7B, and Vicuna-13B. The results reveal EVCAP is effective when applying it in different LLM decoders.", "figure_data": "MethodLLMCOCO testNoCaps val Flickr30k testCSCSCSSmallCap [35]GPT2119.7 21.3--60.6-ViECap [15]GPT292.9 18.2 66.29.5 47.913.6EVCAPGPT2131.0 23.2 97.6 13.3 70.616.1MiniGPT4 [46]Vicuna-7B119.4 23.5 108.7 15.7 73.917.2InstructBLIP [11] Vicuna-7B--123.1-82.4-EVCAPVicuna-7B139.0 24.7 116.8 15.3 82.718.0MiniGPT4 [46]Vicuna-13B 129.6 23.4 108.8 15.1 78.416.9InstructBLIP [11] Vicuna-13B--121.9-82.8-EVCAPVicuna-13B 140.1 24.7 119.3 15.3 84.418.0", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" } ]
Jiaxuan Li; Duc Minh Vo; Akihiro Sugimoto; Hideki Nakayama
[ { "authors": " Gt", "journal": "", "ref_id": "b0", "title": "SmallCap: A man in a racing car with a helmet on. MiniGPT4: A man driving a race car on a track", "year": "" }, { "authors": "Harsh Agrawal; Karan Desai; Yufei Wang; Xinlei Chen; Rishabh Jain; Mark Johnson; Dhruv Batra; Devi Parikh; Stefan Lee; Peter Anderson", "journal": "", "ref_id": "b1", "title": "Nocaps: Novel object captioning at scale", "year": "2019" }, { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katherine Millican; Malcolm Reynolds", "journal": "", "ref_id": "b2", "title": "Flamingo: A visual language model for few-shot learning", "year": "2022" }, { "authors": "Peter Anderson; Xiaodong He; Chris Buehler; Damien Teney; Mark Johnson; Stephen Gould; Lei Zhang", "journal": "", "ref_id": "b3", "title": "Bottom-up and top-down attention for image captioning and visual question answering", "year": "2018" }, { "authors": "Jinze Bai; Shuai Bai; Shusheng Yang; Shijie Wang; Sinan Tan; Peng Wang; Junyang Lin; Chang Zhou; Jingren Zhou", "journal": "", "ref_id": "b4", "title": "Qwen-vl: A frontier large vision-language model with versatile abilities", "year": "2023" }, { "authors": "Nitzan Bitton-Guetta; Yonatan Bitton; Jack Hessel; Ludwig Schmidt; Yuval Elovici; Gabriel Stanovsky; Roy Schwartz", "journal": "", "ref_id": "b5", "title": "Breaking common sense: Whoops! a vision-andlanguage benchmark of synthetic and compositional images", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "", "ref_id": "b6", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Xi Chen; Josip Djolonga; Piotr Padlewski; Basil Mustafa; Soravit Changpinyo; Jialin Wu; Carlos Riquelme Ruiz; Sebastian Goodman; Xiao Wang; Yi Tay", "journal": "", "ref_id": "b7", "title": "PaLI-X: On scaling up a multilingual vision and language model", "year": "2023" }, { "authors": "Xi Chen; Xiao Wang; Soravit Changpinyo; Piotr Piergiovanni; Daniel Padlewski; Sebastian Salz; Adam Goodman; Basil Grycner; Lucas Mustafa; Alexander Beyer; Joan Kolesnikov; Nan Puigcerver; Keran Ding; Hassan Rong; Gaurav Akbari; Linting Mishra; Ashish V Xue; James Thapliyal; Weicheng Bradbury; Mojtaba Kuo; Chao Seyedhosseini; Burcu Jia; Carlos Riquelme Karagol Ayan; Andreas Peter Ruiz; Anelia Steiner; Xiaohua Angelova; Neil Zhai; Radu Houlsby; Soricut", "journal": "", "ref_id": "b8", "title": "PaLI: A jointly-scaled multilingual languageimage model", "year": "2023" }, { "authors": "Wei-Lin Chiang; Zhuohan Li; Zi Lin; Ying Sheng; Zhanghao Wu; Hao Zhang; Lianmin Zheng; Siyuan Zhuang; Yonghao Zhuang; Joseph E Gonzalez", "journal": "", "ref_id": "b9", "title": "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2004" }, { "authors": "Wenliang Dai; Junnan Li; Dongxu Li; Anthony Meng; Huat Tiong; Junqi Zhao; Weisheng Wang; Boyang Li; Pascale Fung; Steven Hoi", "journal": "", "ref_id": "b10", "title": "Instructblip: Towards generalpurpose vision-language models with instruction tuning", "year": "2023" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b11", "title": "An image is worth 16x16 words: Transformers for image 
recognition at scale", "year": "2021" }, { "authors": "Jiashuo Fan; Yaoyuan Liang; Leyao Liu; Shaolun Huang; Lei Zhang", "journal": "", "ref_id": "b12", "title": "Rca-noc: Relative contrastive alignment for novel object captioning", "year": "2023" }, { "authors": "Yuxin Fang; Wen Wang; Binhui Xie; Quan Sun; Ledell Wu; Xinggang Wang; Tiejun Huang; Xinlong Wang; Yue Cao", "journal": "", "ref_id": "b13", "title": "Eva: Exploring the limits of masked visual representation learning at scale", "year": "2023" }, { "authors": "Junjie Fei; Teng Wang; Jinrui Zhang; Zhenyu He; Chengjie Wang; Feng Zheng", "journal": "", "ref_id": "b14", "title": "Transferable decoding with visual entities for zero-shot image captioning", "year": "2023" }, { "authors": "Zhengcong Fei", "journal": "", "ref_id": "b15", "title": "Memory-augmented image captioning", "year": "2021" }, { "authors": "Agrim Gupta; Piotr Dollar; Ross Girshick", "journal": "", "ref_id": "b16", "title": "LVIS: A dataset for large vocabulary instance segmentation", "year": "2019" }, { "authors": "Ryuichiro Hataya; Han Bao; Hiromi Arai", "journal": "", "ref_id": "b17", "title": "Will largescale generative models corrupt future datasets?", "year": "2023" }, { "authors": "Anne Lisa; Subhashini Hendricks; Marcus Venugopalan; Raymond Rohrbach; Kate Mooney; Trevor Saenko; Darrell", "journal": "", "ref_id": "b18", "title": "Deep compositional captioning: Describing novel object categories without paired training data", "year": "2016" }, { "authors": "Ziniu Hu; Ahmet Iscen; Chen Sun; Zirui Wang; Kai-Wei Chang; Yizhou Sun; Cordelia Schmid; David A Ross; Alireza Fathi", "journal": "", "ref_id": "b19", "title": "Reveal: Retrieval-augmented visual-language pre-training with multi-source multimodal knowledge memory", "year": "2023" }, { "authors": "Jeff Johnson; Matthijs Douze; Hervé Jégou", "journal": "IEEE Transactions on Big Data", "ref_id": "b20", "title": "Billionscale similarity search with gpus", "year": "2019" }, { "authors": "Andrej Karpathy; Li Fei-Fei", "journal": "", "ref_id": "b21", "title": "Deep visual-semantic alignments for generating image descriptions", "year": "2015" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton ; Lee Kristina; Toutanova ", "journal": "", "ref_id": "b22", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "", "ref_id": "b23", "title": "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b24", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Xiujun Li; Xi Yin; Chunyuan Li; Pengchuan Zhang; Xiaowei Hu; Lei Zhang; Lijuan Wang; Houdong Hu; Li Dong; Furu Wei", "journal": "", "ref_id": "b25", "title": "Oscar: Object-semantics aligned pre-training for vision-language tasks", "year": "2020" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "", "ref_id": "b26", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Jiasen Lu; Jianwei Yang; Dhruv Batra; Devi Parikh", "journal": "", "ref_id": "b27", "title": "Neural baby talk", "year": "2018" }, { "authors": "Ron Mokady; Amir Hertz; Amit H Bermano", 
"journal": "", "ref_id": "b28", "title": "Clipcap: Clip prefix for image captioning", "year": "2021" }, { "authors": "Liwei Bryan A Plummer; Chris M Wang; Juan C Cervantes; Julia Caicedo; Svetlana Hockenmaier; Lazebnik", "journal": "", "ref_id": "b29", "title": "Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models", "year": "2015" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b30", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b31", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research (JMLR)", "ref_id": "b32", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Rita Ramos; Desmond Elliott; Bruno Martins", "journal": "", "ref_id": "b33", "title": "Retrievalaugmented image captioning", "year": "2023" }, { "authors": "Rita Ramos; Bruno Martins; Desmond Elliott; Yova Kementchedjhieva", "journal": "", "ref_id": "b34", "title": "Smallcap: Lightweight image captioning prompted with retrieval augmentation", "year": "2023" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b35", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b36", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of machine learning research (JMLR)", "ref_id": "b37", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "Subhashini Venugopalan; Lisa Anne Hendricks; Marcus Rohrbach; Raymond Mooney; Trevor Darrell; Kate Saenko", "journal": "", "ref_id": "b38", "title": "Captioning images with diverse objects", "year": "2017" }, { "authors": "Minh Duc; Hong Vo; Akihiro Chen; Hideki Sugimoto; Nakayama", "journal": "", "ref_id": "b39", "title": "Noc-rek: Novel object captioning with retrieved vocabulary from external knowledge", "year": "2022" }, { "authors": "Weihan Wang; Qingsong Lv; Wenmeng Yu; Wenyi Hong; Ji Qi; Yan Wang; Junhui Ji; Zhuoyi Yang; Lei Zhao; Xixuan Song; Jiazheng Xu; Bin Xu; Juanzi Li; Yuxiao Dong; Ming Ding; Jie Tang", "journal": "", "ref_id": "b40", "title": "Cogvlm: Visual expert for pretrained language models", "year": "2023" }, { "authors": "Kelvin Xu; Jimmy Ba; Ryan Kiros; Kyunghyun Cho; Aaron Courville; Ruslan Salakhudinov; Rich Zemel; Yoshua Bengio", "journal": "", "ref_id": "b41", "title": "Show, attend and tell: Neural image caption generation with visual attention", "year": "" }, { "authors": "Zhuolin Yang; Wei Ping; Zihan Liu; Vijay Korthikanti; Weili Nie; De-An Huang; Linxi Fan; Zhiding Yu; Shiyi Lan; Bo Li", "journal": "", "ref_id": "b42", "title": "Re-ViLM: Retrieval-augmented visual language model for zero and 
few-shot image captioning", "year": "2023" }, { "authors": "Michihiro Yasunaga; Armen Aghajanyan; Weijia Shi; Richard James; Jure Leskovec; Percy Liang; Mike Lewis; Luke Zettlemoyer; Wen-Tau Yih", "journal": "", "ref_id": "b43", "title": "Retrieval-augmented multimodal language modeling", "year": "2023" }, { "authors": "Pengchuan Zhang; Xiujun Li; Xiaowei Hu; Jianwei Yang; Lei Zhang; Lijuan Wang; Yejin Choi; Jianfeng Gao", "journal": "", "ref_id": "b44", "title": "Vinvl: Revisiting visual representations in vision-language models", "year": "2021" }, { "authors": "Deyao Zhu; Jun Chen; Xiaoqian Shen; Xiang Li; Mohamed Elhoseiny", "journal": "", "ref_id": "b45", "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models", "year": "2008" } ]
[ { "formula_coordinates": [ 3, 335.55, 619.49, 120.5, 12.2 ], "formula_id": "formula_0", "formula_text": "{k i 1 , k i 2 , • • • , k i 32 } = E(X i )." }, { "formula_coordinates": [ 4, 50.11, 234.35, 236.25, 28.48 ], "formula_id": "formula_1", "formula_text": "q ⊤ j k i ∥qj ∥∥k i ∥ , where i ∈ [1, M ], j ∈ [1," }, { "formula_coordinates": [ 4, 85.15, 364.62, 166.17, 35.1 ], "formula_id": "formula_2", "formula_text": "{k best j , v best j } = arg max k i SIM q j , k i , j = arg max j SIM(q j , k best j ), v l ← v best j ," }, { "formula_coordinates": [ 4, 50.11, 513.6, 198.45, 9.65 ], "formula_id": "formula_3", "formula_text": "S = {v 1 , [SEP], v 2 , [SEP], • • • , [SEP], v K }." }, { "formula_coordinates": [ 4, 331.52, 299.28, 196.29, 14.11 ], "formula_id": "formula_4", "formula_text": "L θ = - L i=1 log p θ (c i | w 1 , ...w N , c 1 , ..., c i-1" }, { "formula_coordinates": [ 5, 55.85, 314.95, 477.3, 14.52 ], "formula_id": "formula_5", "formula_text": "17B - - 149.1 - - - - - - - 127.0 - - - PaLI-X UL2-32B [8] 2.2B 55B - - 149.2 - - - - - - - 126.3 - - -" } ]
2023-11-27
[ { "figure_ref": [], "heading": "", "publication_ref": [ "b40", "b40", "b37" ], "table_ref": [], "text": "Perceptual metrics, such as LPIPS [40], better capture the perceptual quality. The proposed StableVSR enhances the perceptual quality in video super-resolution, leading to better visual results. Best results in bold text. PSNR: the higher, the better. LPIPS [40]: the lower, the better. Results using ×4 upscaling factor on Vimeo90K [37]." }, { "figure_ref": [], "heading": "Abstract", "publication_ref": [], "table_ref": [], "text": "In this paper, we address the problem of video superresolution (VSR) using Diffusion Models (DM), and present StableVSR. Our method significantly enhances the perceptual quality of upscaled videos by synthesizing realistic and temporally-consistent details. We turn a pretrained DM for single image super-resolution into a VSR method by introducing the Temporal Conditioning Module (TCM). TCM uses Temporal Texture Guidance, which provides spatially-aligned and detail-rich texture information synthesized in adjacent frames. This guides the generative process of the current frame toward high-quality and temporally-consistent results. We introduce a Frame-wise Bidirectional Sampling strategy to encourage the use of information from past to future and vice-versa. This strategy" }, { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b21", "b3", "b4", "b2", "b4", "b40", "b11", "b25", "b29", "b10", "b12", "b30", "b27", "b1", "b9", "b22", "b38", "b25", "b25" ], "table_ref": [], "text": "Video super-resolution (VSR) is the task of increasing the spatial resolution of a video by enhancing its level of detail and clarity [21]. Recently, many VSR methods based on deep learning techniques have been proposed [3,4,19]. However, these methods mainly focus on reconstruction quality, often ignoring perceptual quality. As a consequence, they may fail to match the fidelity expected at higher resolution [16]. According to the perceptiondistortion trade-off [2], improving reconstruction quality inevitably leads to a decrease in perceptual quality. As shown in Figure 1, frames generated by recent state-of-theart methods [4,19] have high reconstruction quality, with high PSNR values (the higher, the better), but are not perceptually photorealistic, and have high LPIPS [40] values (the lower, the better).\nInspired by the success of Diffusion Models (DMs) in generating high-quality images [6,11,25,29], several works have been recently proposed to address the problem of single image super-resolution (SISR) using DMs [10,12,17,30]. They show the effectiveness of DMs in synthesizing realistic textures and details, contributing to enhancing the perceptual quality of upscaled images [16]. Compared to SISR, VSR requires the integration of information from multiple closely related but misaligned frames to obtain temporal consistency over time. Unfortunately, applying frame-by-frame a SISR method to a video may lead to suboptimal results and introduces temporal inconsistency [27]. Different approaches to encourage temporal consistency in video generation using DMs have been recently studied [1,9,22,38] However, these methods do not address VSR and do not use fine-texture temporal guidance. 
As a consequence, they may fail to achieve temporal consistency at the fine-detail level, essential in the context of VSR.\nIn this paper, we address these problems and present Stable Video Super-Resolution (StableVSR), a novel method for VSR based on Latent Diffusion Models (LDMs) [25]. StableVSR enhances the perceptual quality of upscaled videos by synthesizing realistic and temporally-consistent details.\nStableVSR exploits a pre-trained LDM for SISR [25] to perform VSR by introducing the novel Temporal Conditioning Module (TCM). TCM guides the generative process of the current frame toward the generation of high-quality and temporally-consistent results over time. This is achieved by using the novel Temporal Texture Guidance, which provides TCM with spatially-aligned and detail-rich texture information from adjacent frames: at every sampling step t, the predictions of the adjacent frames are projected to their initial state, i.e. t = 0, and spatially aligned to the current frame. At inference time, StableVSR uses the novel Frame-wise Bidirectional Sampling strategy to avoid error accumulation problems and balance information propagation: a sampling step is first taken on all frames before advancing in sampling time, and information is alternatively propagated forward and backward in video time.\nIn summary, our main contributions are the following: " }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b3", "b4", "b18", "b21", "b27", "b32", "b35", "b37", "b32", "b42", "b35", "b32", "b33", "b3", "b4", "b3", "b18", "b33", "b18", "b11", "b25", "b29", "b10", "b12", "b28", "b30", "b30", "b28", "b30", "b25", "b12", "b10" ], "table_ref": [], "text": "Video super-resolution. Video super-resolution (VSR) based on deep learning has witnessed considerable advances in the past few years [3,4,18,19,21,27,32,35]. ToFlow [37] showed that optimizing a pre-trained motion estimation method with the rest of the framework leads to better results. TDAN [32] proposed the use of deformable convolutions [42] for spatial alignment as an alternative to optical flow computation. EDVR [35] extended the alignment module proposed in TDAN [32] to better handle large motion and used temporal attention [33] to balance the contribution of each frame. BasicVSR [3] revised the essential components for a VSR method, i.e. bidirectional information propagation and spatial feature alignment, and proposed a simple yet effective solution. BasicVSR++ [4] improved BasicVSR [3] by adding second-order grid propagation and flow-guided deformable alignment. VRT [18] adopted the attention mechanism [33] to better capture long-range frame dependencies and enable parallel frame predictions. RVRT [19] improved VRT [18] by integrating the advantages of recurrent networks and reducing model complexity.\nDiffusion models for single image super-resolution. The success of Diffusion Models (DMs) in image generation [6,11,25,29] inspired the development of single image superresolution (SISR) methods based on DMs [10,12,17,28,30]. SRDiff [17] and SR3 [30] demonstrate DMs can achieve impressive results in SISR. SR3+ [28] extended SR3 [30] to images in the wild by proposing a higher-order degradation scheme and noise conditioning augmentation. LDM [25] proposed to work in a VAE latent space [8] to reduce complexity requirements and training time. CMD [12] proposed to cascade multiple DMs to achieve SISR at arbitrary scales. 
IDM [10] proposed to introduce the implicit image function in the decoding part of a DM to achieve continuous super-resolution." }, { "figure_ref": [], "heading": "Background on Diffusion Models", "publication_ref": [ "b11" ], "table_ref": [], "text": "Diffusion Models (DMs) [11] convert a complex data distribution x 0 ∼ p data into a simple Gaussian distribution x T ∼ N (0, I), and then recover data from it. A DM is composed of two processes: diffusion process and reverse process." }, { "figure_ref": [], "heading": "Diffusion process.", "publication_ref": [], "table_ref": [], "text": "The diffusion process is a Markov chain that corrupts data x 0 ∼ p data until they approach Gaussian noise x T ∼ N (0, I) after T diffusion steps. It is defined as:\nq(x 1 , ..., x T |x 0 ) = T t=1 q(x t |x t-1 )(1)\nwhere t represents a diffusion step and q(x t |x t-1 ) = N (x t ; √ 1 -β t (x t-1 ), β t I), with β t being a fixed or learnable variance schedule. At any step t, x t can be directly sampled from x 0 as:\nx t = √ α t x 0 + √ 1 -α t ϵ(2)\nwhere α t = 1 -β t , α t = t i=1 α i and ϵ ∼ N (0, I). Reverse process. The reverse process is a Markov chain that removes noise from x T ∼ N (0, I) until data x 0 ∼ p data are obtained. It is defined as:\np θ (x 0 , ..., x T -1 |x T ) = T t=1 p θ (x t-1 |x t )(3)\nwhere p θ (x t-1 |x t ) = N (x t-1 ; µ θ (x t , t), Σ θ I). A neural network ϵ θ is trained to predict ϵ from x t , and it can be used to estimate µ θ (x t , t) as:\nµ θ (x t , t) = 1 √ α t x t - 1 -α t √ 1 -α t ϵ θ (x t , t)(4)\nAs a consequence, we can sample x t-1 ∼ p θ (x t-1 |x t ) as:\nx t-1 = 1 √ α t x t - 1 -α t √ 1 -α t ϵ θ (x t , t) + σ t z (5)\nwhere z ∼ N (0, I) and σ t is the variance schedule. In practice, according to Eq. 2 ,we can directly predict x0 from x t via projection to the initial state t = 0 as:\nx0 = 1 √ α t x t - √ 1 -α t ϵ θ (x t , t)(6)\nand then sample x t-1 using x 0 and x t as:\nx t-1 = √ α t-1 (1 -α t ) 1 -α t x0 + √ α t (1 -α t-1 ) 1 -α t x t +σ t z (7)\nwhere z ∼ N (0, I) and σ t is the variance schedule." }, { "figure_ref": [ "fig_1" ], "heading": "Methodology", "publication_ref": [ "b25", "b25" ], "table_ref": [], "text": "We present Stable Video Super-Resolution (StableVSR), a method for video super-resolution (VSR) based on Latent Diffusion Models (LDM) [25]. StableVSR enhances the perceptual quality in VSR through temporally-consistent detail synthesis. The overview of the method is shown in Figure 2. StableVSR is built upon a pre-trained LDM for single image super-resolution [25], which is turned into a VSR method through the design and addition of the Temporal Conditioning Module (TCM). TCM uses detail and structure information synthesized in adjacent frames to guide the generative process of the current frame. It allows to obtain high-quality and temporally-consistent frames over time. We design the Temporal Texture Guidance to provide TCM with rich texture information about the adjacent frames: at every sampling step, their predictions are projected to their initial state via Eq. 6, converted into RGB frames, and aligned with the current frame via optical flow estimation and motion compensation. We introduce in Sta-bleVSR the Frame-wise Bidirectional Sampling strategy, where a sampling step is taken on all frames advancing in sampling time, and information is alternatively propagated forward and backward in video time. This alleviates the problem of error accumulation and balances the information propagation over time." 
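Because the projection of Eq. 6 and the update of Eq. 5 are reused throughout StableVSR, a compact sketch of one reverse step may help. The following NumPy version is illustrative only: alphas, alpha_bars and sigmas are assumed to be precomputed schedules, and eps_model is a placeholder for the trained noise predictor ϵ_θ; none of these names come from the paper.

import numpy as np

def ddpm_reverse_step(x_t, t, eps_model, alphas, alpha_bars, sigmas):
    # Predict the noise, project to the initial state (Eq. 6), then sample x_{t-1} (Eq. 5).
    eps = eps_model(x_t, t)  # placeholder for eps_theta(x_t, t)
    x0_hat = (x_t - np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alpha_bars[t])  # Eq. 6
    z = np.random.randn(*x_t.shape) if t > 1 else np.zeros_like(x_t)
    x_prev = (x_t - (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t]) + sigmas[t] * z  # Eq. 5
    return x_prev, x0_hat

In StableVSR, the x0 estimate of an adjacent frame (x0_hat above) is exactly what is decoded and warped to build the Temporal Texture Guidance described next.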
}, { "figure_ref": [], "heading": "Temporal Conditioning Module", "publication_ref": [ "b25", "b27", "b25", "b25", "b26", "b39" ], "table_ref": [], "text": "Applying frame-by-frame the SISR LDM [25] to videos introduces temporal inconsistency, as each frame is generated only based on the content of a single low-resolution frame. Moreover, this approach does not exploit the content shared among multiple video frames, leading to suboptimal results [27]. We address these problems by introducing the Temporal Conditioning Module (TCM) into the SISR LDM [25]. The goal is twofold: (1) enabling the use of spatio-temporal information from multiple frames; (2) enforcing temporal consistency across frames. We use the information generated by the SISR LDM [25] in the adjacent frames to guide the generation process of the current frame. Besides obtaining temporal consistency, this solution also provides additional sources of information to handle very small or occluded objects. TCM injects temporal conditioning into the decoder of the denoising UNet [26], as proposed in ControlNet [39]." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Temporal Texture Guidance", "publication_ref": [ "b1", "b22" ], "table_ref": [], "text": "The Temporal Texture Guidance provides the Temporal Conditioning Module with the texture information synthesized in adjacent frames. The goal is to guide the generative process of the current frame toward the generation of highquality and temporally-consistent results.\nGuidance on x0 . Using results of the previous sampling step {x t } N i=1 as guidance to predict {x t-1 } N i=1 , as proposed in [1,22], may not provide adequate texture information along the whole reverse process. This is because x t is corrupted by noise until t approaches 0, as shown in Figure 3. We address this problem by using a noise-free approximation of x t , i.e. x0 , to be used as guidance when taking a given sampling step t. This is achieved by projecting x t to its initial state, i.e. t = 0, using Eq 6. Since x0 ≈ x 0 , it contains very little noise. In addition, it provides detail-rich texture information that is gradually refined as t approaches 0, as shown in Figure 3." }, { "figure_ref": [], "heading": "Temporal conditioning.", "publication_ref": [], "table_ref": [], "text": "We need to use information synthesized in adjacent frames to ensure temporal consistency. We achieve this by using x0 obtained from the previous frame, i.e. xi-1 0 , as guidance when generating the current frame. Since xi-1 ϵ θ (x i-1 t , t, LR i-1 ) via Eq. 6, it contains the texture information synthesized in the previous frame at sampling step t.\n0 is computed from x i-1 t using x t x0 ||x 0 -x0 || ||HR -x 0 || t = 900 t = 500 t = 25" }, { "figure_ref": [], "heading": "Spatial alignment.", "publication_ref": [ "b3" ], "table_ref": [], "text": "According to [3], spatial alignment is essential to properly aggregate information from multiple frames. The texture information contained in xi-1 0 may not be spatially aligned with respect to the current frame due to applied to x0 Motion compensation applied to D(x 0 ) Formulation. Given the previous and the current lowresolution frames LR i-1 and LR i , the current sampling step t and the latent of the previous frame x i-1 t , the Temporal Texture Guidance HR i-1→i is computed as:\nHR i-1→i = MC(ME(LR i-1 , LR i ), D(x i-1 0 )) (8\n)\nwhere MC is the motion compensation function, ME is the motion estimation method, D is the VAE decoder [8] and xi-1 0 is computed via Eq. 
6 using ϵ θ (x i-1 t , t, LR i-1 )." }, { "figure_ref": [], "heading": "Frame-wise Bidirectional Sampling strategy", "publication_ref": [ "b38", "b3" ], "table_ref": [], "text": "Progressing all the sampling steps on one frame and using the result as guidance for the next frame in an autoregressive manner, as proposed in [38], may introduce the problem of error accumulation. In addition, unidirectional information propagation from past to future frames may lead to suboptimal results [3]. We address these problems by proposing the Frame-wise Bidirectional Sampling strategy: we take a given sampling step t on all the frames before taking the next sampling step t -1, alternatively propagating information forward and backward in video time.\nThe pseudocode is detailed in Algorithm 1. Given the latent x i t at a sampling step t, the Temporal Texture Guidance HR 1: for i = 1 to N do 2:\nx i T = N (0, I)\n3: end for 4: for t = T to 1 do 5:\nfor i = 1 to N do ▷ Take a given sampling step on all the frames 6:\nHR i-1→i = MC(ME(LR i-1 , LR i ), D(x i-10\n)) if i > 1 ▷ Eq. 8\n7:\nε = ϵ θ (x i t , t, LR i , HR i-1→i ) if i > 1 else ϵ θ (x i t , t, LR i )\n8:\nxi 0 = 1 √ α t x i t - √ 1 -αtε ▷ Eq. 6\n9:\nz = N (0, I) if t > 1 else 0 10:\nx i t-1 = 1 √ α t x i t - 1-α t √ 1-α t ε + σtz ▷ Eq. 5\n11:\nend for" }, { "figure_ref": [], "heading": "12:", "publication_ref": [], "table_ref": [], "text": "Reverse sequence order of {xt-1} N , {x0} N and {LR} N 13: end for 14:\nreturn {HR} N = {D(x0)} N Algorithm 2\nTraining procedure. ME and MC are \"motion estimation\" and \"motion compensation\", respectively.\nInput: Dataset D with (LR, HR) pairs; pre-trained ϵ θ for SISR, method for ME.\n1: repeat 2:\n(LR i-1 , HR i-1 ), (LR i , HR i ) ∼ D 3:\nx i-1 0 , x i 0 = E(HR i-1 ), E(HR i )\n4:\nϵ i-1 , ϵ i ∼ N (0, I)" }, { "figure_ref": [], "heading": "5:", "publication_ref": [], "table_ref": [], "text": "t ∼ {0, ..., T }" }, { "figure_ref": [], "heading": "6:", "publication_ref": [], "table_ref": [], "text": "εi\n-1 = ϵ θ ( √ αtx i-1 0 + √ 1 -αtϵ i-1 , t, LR i-1 )" }, { "figure_ref": [], "heading": "7:", "publication_ref": [], "table_ref": [], "text": "xi-1\n0 = 1 √ α t x i t - √ 1 -αtε i-1 ▷ Eq. 6 8: HR i-1→i = MC(ME(LR i-1 , LR i ), D(x i-10\n)) ▷ Eq. 8" }, { "figure_ref": [], "heading": "9:", "publication_ref": [ "b3", "b4" ], "table_ref": [], "text": "Take gradient descent step on: 10:\n∇ θ (||ϵ i -ϵ θ ( √ αtx i 0 + √ 1 -αtϵ i , t, LR i , HR i-1→i )||)\n11: until convergence not use the Temporal Conditioning Module during forward and backward propagation, respectively. This is in line with other methods [3,4]." }, { "figure_ref": [], "heading": "Training procedure", "publication_ref": [ "b25", "b39", "b37", "b32", "b3", "b18", "b4", "b35", "b3", "b18", "b4" ], "table_ref": [], "text": "StableVSR is built upon a pre-trained LDM for single image super-resolution [25], hence we only need to train the Temporal Conditioning Module. We extend the Control-Net [39] training procedure by adding an additional step to compute the Temporal Texture Guidance HR i-1→i from the previous frame to be used for the current one. The pseudocode is detailed in Algorithm 2. Given two (LR, HR) pairs of consecutive frames (LR i-1 , HR i-1 ) and (LR i , HR i ), we first compute x i-1 0 and x i 0 by converting HR i-1 and HR i into the latent space using the VAE encoder E [8]. We add ϵ ∼ N (0, I) to x i-1 0 via Eq. 2, obtaining x i-1 t . We then compute xi-1 0 using x i-1 t and ϵ θ (x i-1 t , t, LR i-1 ) via Eq. 
6, and we obtain HR i-1→i to be used for the current frame via Eq. 8. The training objective is: Sequence 82, clip 798 (Vimeo-90K [37]) HR Bicubic TDAN [32] BasicVSR [3] VRT [18] BasicVSR++ [4] RVRT [19] StableVSR (Ours)\nE t,x i 0 ,ϵ,LR i , HR i-1→i [||ϵ -ϵ θ (x i t , t, LR i , HR i-1→i )||],(9)\nClip 015, frame 38 (REDS [24]) HR Bicubic EDVR [35] BasicVSR [3] VRT [18] BasicVSR++ [4] RVRT [19] StableVSR (Ours) where t ∼ [1, T ] and x i t is obtained by adding ϵ ∼ N (0, I) to x i 0 via Eq. 2." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation details", "publication_ref": [ "b25", "b39", "b31", "b11" ], "table_ref": [], "text": "StableVSR is built upon Stable Diffusion ×4 upscaler 1 (SD×4Upscaler) [25], which uses the low-resolution images as guidance via concatenation. SD×4Upscaler uses a VAE decoder [8] with ×4 upscaling factor to perform superresolution. We use the same decoder in our StableVSR. The 1 https : / / huggingface . co / stabilityai / stablediffusion-x4-upscaler architecture details are reported in the supplementary material. In all our experiments, the results are referred to ×4 super-resolution. We add the Temporal Conditioning Module via ControlNet [39] and train it for 20000 steps. We use RAFT [31] for optical flow computation. We use 4 NVIDIA Quadro RTX 6000 for our experiments. We use the Adam optimizer [14] with a batch size set to 32 and the learning rate fixed to 1e -5. Randomly cropped patches of size 256 × 256 with horizontal flip are used as data augmentation. We use DDPM [11] sampling with T = 1000 during training and T = 50 during inference." }, { "figure_ref": [], "heading": "Datasets and evaluation metrics", "publication_ref": [ "b37", "b37", "b3", "b4", "b40", "b7", "b13", "b34", "b23", "b36", "b15", "b13", "b34", "b23", "b40", "b7", "b36", "b15" ], "table_ref": [], "text": "We adopt two benchmark datasets: Vimeo-90K [37] and REDS [24]. Vimeo-90K [37] contains 91701 7-frame video sequences at 448 × 256 resolution. It covers a broad range of actions and scenes. Among these sequences, 64612 are used for training and 7824 for testing. REDS [24] is a realistic and dynamic scene dataset containing 300 video sequences. Each sequence has 100 frames at 1280 × 720 resolution. Following previous work [3,4], we use sequences 000, 011, 015, and 020 for testing and all the others for training. We use a variety of perceptual metrics, including LPIPS [40], DISTS [7], MUSIQ [13], CLIP-IQA [34] and NIQE [23], to evaluate the perceptual quality of StableVSR results. We also report reconstruction metrics like PSNR and SSIM [36] for reference. We adopt Warping Error (WE) [15] for the evaluation of temporal consistency. MUSIQ [13], CLIP-IQA [34] and NIQE [23] are no-reference metrics, while LPIPS [40], DISTS [7], PSNR, SSIM [36] and WE [15] are full-reference metrics." }, { "figure_ref": [ "fig_5" ], "heading": "Comparison with state-of-the-art methods", "publication_ref": [ "b37", "b35", "b32", "b3", "b18", "b4", "b36", "b36", "b2" ], "table_ref": [ "tab_1", "tab_1" ], "text": "We compare StableVSR with other state-of-the-art methods including ToFlow [37], EDVR [35], TDAN [32], Ba-sicVSR [3], VRT [18] BasicVSR++ [4] and RVRT [19]. Since only PSNR and SSIM [36] are evaluated in the official papers, we use the results obtained using the pre-trained models to evaluate the other metrics. The quantitative comparison is reported in Table 1. 
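For reference, per-frame scores of the kind reported in Table 1 can be computed with off-the-shelf tools. The sketch below assumes 8-bit RGB frames and uses the lpips package for LPIPS [40] and a direct PSNR computation; the package choice and helper names are our assumptions, not necessarily the evaluation code used by the authors.

import numpy as np
import torch
import lpips

lpips_fn = lpips.LPIPS(net='alex')  # LPIPS: the lower, the better

def to_lpips_tensor(img_uint8):
    # HWC uint8 in [0, 255] -> NCHW float in [-1, 1], the range expected by lpips
    x = torch.from_numpy(img_uint8).permute(2, 0, 1).float() / 127.5 - 1.0
    return x.unsqueeze(0)

def psnr(ref_uint8, out_uint8):
    # PSNR in dB: the higher, the better (assumes 8-bit frames)
    mse = np.mean((ref_uint8.astype(np.float64) - out_uint8.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

# per frame: lpips_val = lpips_fn(to_lpips_tensor(out), to_lpips_tensor(ref)).item()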
As shown, StableVSR outperforms the other methods considering all the perceptual metrics. This is also confirmed by the qualitative results shown in Figure 5: the frames upscaled by StableVSR look more natural and realistic. StableVSR, due to its generative nature, is the only method able to synthesize information that cannot be found in the spatio-temporal frame neighborhood. This is because it captures the semantics of the scenes and synthesizes missing information accordingly. In Table 1, we can observe StableVSR has poorer performance in PSNR and SSIM [36]. This is in line with the perceptiondistortion trade-off [2]. We report additional results in the supplementary material." }, { "figure_ref": [ "fig_6" ], "heading": "Impact of sampling steps", "publication_ref": [ "b36", "b40", "b7", "b13", "b34", "b23", "b15" ], "table_ref": [], "text": "We study how the performance changes as the number of sampling steps increases. Figure 6 shows the results obtained by increasing the number of sampling steps from 10 to 100. Reconstruction quality (PSNR and SSIM [36]) deteriorates with more sampling steps. Conversely, perceptual quality (LPIPS [40], DISTS [7], MUSIQ [13], CLIP-IQA [34] and NIQE [23]) improves. We can attribute this behavior to the iterative refinement process of Diffusion Models, which progressively refines realistic image details that may not be perfectly aligned with the reference. In addition, since frames obtained using very few steps are blurry, the temporal consistency measured via WE [15] is higher. According to these results, 50 sampling steps represent a good balance between perceptual quality and temporal consistency." }, { "figure_ref": [ "fig_3", "fig_8" ], "heading": "Ablation study", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Temporal Texture Guidance. Figure 7 shows the results obtained by removing one of the operations in the Temporal Texture Guidance. Using guidance on x t instead of x0 leads to very noisy frames. These noisy frames cannot provide adequate information when t is far from 0. With no motion compensation, the spatial information is not aligned with respect to the current frame and cannot be properly used. Applying motion compensation in the latent space introduces distortions in the guidance, as also shown in Figure 4. In all these cases, temporal consistency at fine-detail level cannot be achieved. The proposed approach provides detail-rich and spatially-aligned texture guidance at every sampling step t, leading to better temporal consistency.\nFrame-wise Bidirectional Sampling strategy. We compare the proposed Frame-wise Bidirectional Sampling strat- egy with: single-frame sampling, i.e. no temporal conditioning; auto-regressive sampling, i.e. the previous upscaled frame is used as guidance for the current one; framewise unidirectional sampling, i.e. only forward information propagation. The results are quantitatively and qualitatively evaluated in Table 2 and Figure 8, respectively. Singleframe sampling leads to poor results and introduces temporal inconsistency due to the differences in the synthesized frame details. The auto-regressive approach has the problem of error accumulation, which is propagated to the next frames. Unidirectional sampling unbalances the information propagation, as only future frames receive information from the past ones, limiting the overall performance. The proposed Frame-wise Bidirectional Sampling solves these problems, leading to better and more consistent results." 
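The scheduling logic compared above can be summarized in a short sketch of the Frame-wise Bidirectional Sampling loop. Here step_fn and guidance_fn are illustrative placeholders: in the paper, step_fn corresponds to one sampling step of ϵ_θ conditioned on the low-resolution frame and on the Temporal Texture Guidance, and guidance_fn to Eq. 8. The point of the sketch is the loop order: one sampling step is taken on all frames before advancing in sampling time, and the propagation direction is flipped at every step.

def framewise_bidirectional_sampling(latents, lows, T, step_fn, guidance_fn):
    # latents: per-frame noisy latents x_T; lows: per-frame low-resolution inputs
    x0_hats = [None] * len(latents)
    for t in range(T, 0, -1):  # advance in sampling time
        for i in range(len(latents)):  # ...taking the same step t on every frame
            # guidance built from the previous frame's x0 estimate (Eqs. 6 and 8);
            # the first frame in the current direction is sampled without guidance
            guide = guidance_fn(x0_hats[i - 1], lows[i - 1], lows[i]) if i > 0 else None
            latents[i], x0_hats[i] = step_fn(latents[i], t, lows[i], guide)
        # flip video time so information flows alternately forward and backward
        latents.reverse()
        x0_hats.reverse()
        lows.reverse()
    return latents  # if T is odd, the list order ends up reversed and should be flipped back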
}, { "figure_ref": [], "heading": "Discussion and limitations", "publication_ref": [ "b2", "b10", "b30", "b5", "b20", "b21", "b27" ], "table_ref": [], "text": "Reconstruction quality results. We focus on using Diffusion Models (DMs) to enhance the perceptual quality in video super-resolution (VSR). Improving perceptual quality inevitably leads to a decrease in reconstruction quality [2]. Recent works on single image super-resolution using DMs [10,17,30] reported lower reconstruction quality when compared to regression-based methods [5,20]. Although most VSR methods target reconstruction quality, several studies [21,27] highlighted the urgent need to address perceptual quality. We take a step in this direction." }, { "figure_ref": [], "heading": "Model complexity.", "publication_ref": [ "b26", "b41" ], "table_ref": [], "text": "The complexity of the denoising UNet [26] we use in StableVSR is ×20 higher than the compared methods, increasing training time and memory occupation requirements. The iterative refinement process of DMs inevitably increases inference time. StableVSR takes about 100 seconds to upscale a video to a 1280 × 720 target resolution on a NVIDIA Quadro RTX A6000 using 50 sampling steps. In future works, we plan to incorporate current research in speeding up DMs [41]." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "We proposed to enhance the perceptual quality in video super-resolution (VSR) through the synthesis of temporally-consistent details using Diffusion Models (DMs), and presented StableVSR. We turned a pre-trained DM for single-image super-resolution into a VSR method by introducing a Temporal Conditioning Module (TCM). It uses Temporal Texture Guidance with spatially-aligned and detail-rich texture information from adjacent frames to guide the generative process of the current frame toward the generation of high-quality results and ensure temporal consistency. We adopted a Frame-wise Bidirectional Sampling strategy at inference time to further improve percep-tual quality and temporal consistency. We compared Sta-bleVSR with existing state-of-the-art methods for VSR, and showed that it better enhances the perceptual quality of upscaled frames both quantitatively and qualitatively." }, { "figure_ref": [], "heading": "Enhancing Perceptual Quality in Video Super-Resolution through", "publication_ref": [], "table_ref": [], "text": "Temporally-Consistent Detail Synthesis using Diffusion Models Supplementary Material " }, { "figure_ref": [], "heading": "Additional experiments 8.1. Additional implementation details", "publication_ref": [ "b39", "b26" ], "table_ref": [ "tab_3" ], "text": "We report the StableVSR architecture details in Table 3.\nFollowing ControlNet [39], we freeze the weights of the Denoising UNet [26] during training. We only train the Temporal Conditioning Module (TCM) for video adaptation. We apply spatial guidance on the low-resolution frame via concatenation, i.e. the noisy latent x i t (4 channels) is directly concatenated with the low-resolution frame LR i (3 channels). The temporal guidance is instead provided via TCM, which receives Temporal Texture Guidance " }, { "figure_ref": [ "fig_11" ], "heading": "Additional ablation study", "publication_ref": [], "table_ref": [], "text": "Temporal Texture Guidance.\nWe extend the ablation study on Temporal Texture Guidance by reporting the quantitative results on the ablated components in Table 4. 
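For reference, the guidance pipeline whose components are ablated in Table 4 (motion estimation between low-resolution frames, decoding of the previous frame's x0 estimate, and warping onto the current frame as in Eq. 8) can be sketched as follows. OpenCV's Farneback flow is used here only as a stand-in for RAFT [31], and decode stands for the VAE decoder D; both substitutions, as well as the array conventions, are illustrative assumptions rather than the authors' implementation.

import cv2
import numpy as np

def temporal_texture_guidance(lr_prev, lr_cur, x0_hat_prev, decode):
    # Eq. 8 sketch: HR^(i-1 -> i) = MC(ME(LR^(i-1), LR^i), D(x0_hat^(i-1)))
    g_prev = cv2.cvtColor(lr_prev, cv2.COLOR_BGR2GRAY)
    g_cur = cv2.cvtColor(lr_cur, cv2.COLOR_BGR2GRAY)
    # ME: dense flow from the current to the previous low-resolution frame
    flow = cv2.calcOpticalFlowFarneback(g_cur, g_prev, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    # D: decode the noise-free estimate of the previous frame to RGB (x4 upscaling)
    hr_prev = decode(x0_hat_prev)  # assumed to return an HxWx3 numpy array
    # rescale the flow, estimated at LR resolution, to the HR grid
    scale = hr_prev.shape[1] / lr_cur.shape[1]
    flow_hr = cv2.resize(flow, (hr_prev.shape[1], hr_prev.shape[0])) * scale
    # MC: warp the decoded previous frame onto the current frame's pixel grid
    h, w = hr_prev.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow_hr[..., 0]).astype(np.float32)
    map_y = (grid_y + flow_hr[..., 1]).astype(np.float32)
    return cv2.remap(hr_prev, map_x, map_y, cv2.INTER_LINEAR)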
Without using the guidance on x0 (\"No guidance on x0 \" row), i.e. directly using x t , a strong temporal inconsistency is introduced. The lack of proper mechanisms for spatial alignment, such as in the case of no motion compensation (\"No MC\" row) or motion compensation in the VAE latent space [8] (\"No Latent → RGB conversion\" row), limits the overall frame quality and temporal consistency. In comparison, the proposed Temporal Texture Guidance, which includes all the aforementioned operations, leads to better quality and temporal consistency. Figure 9 shows that only the proposed Temporal Texture Guidance ensures temporal consistency at the fine-detail level over time. " }, { "figure_ref": [ "fig_13", "fig_13", "fig_13" ], "heading": "Additional comparison with the state-of-the-art", "publication_ref": [ "b37", "b32", "b3", "b18", "b4", "b37", "b35", "b3", "b18", "b4" ], "table_ref": [], "text": "Figure 10 shows an additional qualitative comparison with state-of-the-art methods on Vimeo90K [37] (Figure 10a) and REDS [24] (Figure 10b). We can observe StableVSR is the only method that correctly upscales complex textures while the other methods fail, producing blurry details.\nReference HR Bicubic TDAN [32] BasicVSR [3] VRT [18] BasicVSR++ [4] RVRT [19] StableVSR (a) Results on Vimeo90k [37].\nReference HR Bicubic EDVR [35] BasicVSR [3] VRT [18] BasicVSR++ [4] RVRT [19] StableVSR " } ]
improves the perceptual quality of the results and the temporal consistency across frames. We demonstrate the effectiveness of StableVSR in enhancing the perceptual quality of upscaled videos compared to existing state-of-the-art methods for VSR. The code is available at https://github.com/claudiom4sir/StableVSR.
Enhancing Perceptual Quality in Video Super-Resolution through Temporally-Consistent Detail Synthesis using Diffusion Models
[ { "figure_caption": "Figure 1 .1Figure 1. Reconstruction metrics, such as PSNR, only evaluate the pixel-wise difference and do not correlate well with human perception.Perceptual metrics, such as LPIPS[40], better capture the perceptual quality. The proposed StableVSR enhances the perceptual quality in video super-resolution, leading to better visual results. Best results in bold text. PSNR: the higher, the better. LPIPS[40]: the lower, the better. Results using ×4 upscaling factor on Vimeo90K[37].", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Overview of the proposed StableVSR. We turn a single image super-resolution LDM [25] into a video super-resolution method through the Temporal Conditioning Module (TCM) (Section 4.1). TCM exploits the Temporal Texture Guidance HR i-1→i (Section 4.2). It provides TCM with spatially-aligned and detail-rich texture information synthesized in adjacent frames to guide the generative process of the current frame toward detail-rich and temporally-consistent results over time. The sampling step is taken using the Frame-wise Bidirectional Sampling strategy (Section 4.3). D represents the VAE decoder [8]. Green lines refer to progression in sampling time, while blue lines refer to progression in video time.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Comparison between guidance on xt and x0. Compared to xt (first column), x0 computed via Eq. 6 contains very little noise regardless of the sampling step t (second column). We can observe x0 is closer to x0 as t decreases (third column). Here, x0 corresponds to the last sampling step, i.e. when t = 1. In addition, x0 increases its level of detail as t decreases (fourth column).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Comparison between applying motion compensation to x0 in the latent space and to D(x0) in the pixel domain. D represents the VAE decoder [8]. In the first scenario, visible artifacts are introduced.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Temporal Conditioning Module is alternatively computed via Eq. 8 using xi-1 0 or xi+1 0 , respectively related to the previous or the next frame. Information is propagated forward and backward in video time: the current frame is conditioned by past frames during forward propagation, and by future frames during backward propagation. The first and the last frames of the sequence do Algorithm 1 Frame-wise Bidirectional Sampling strategy. ME and MC are \"motion estimation\" and \"motion compensation\", respectively.Input: Sequence of low-resolution frames{LR} N ; pre-trained ϵ θ for VSR, VAE decoder D; method for ME.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Qualitative comparison with state-of-the-art methods for VSR. The proposed StableVSR better enhances the perceptual quality of the upscaled frames by synthesizing more realistic details.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Performance changes as the number of sampling steps increases. The x axis represents sampling steps, while the y axis metric values. 
Perceptual metrics are marked with ⋆, reconstruction metrics with ⋄, and temporal consistency metrics with •. Increasing the sampling steps improves perceptual quality while deteriorating reconstruction quality and temporal consistency. Results computed on center crops of 512 × 512 target resolution of REDS [24].", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "10 Frame 11 Figure 7 .10117Figure 7. Ablation experiments for the Temporal Texture Guidance. Only the proposed solution can provide detail-rich and spatially-aligned texture information from adjacent frames at every sampling step t. MC refers to \"motion compensation\". For \"No guidance on x0\" experiment, we use guidance on xt. For \"No Latent→RGB conversion\" experiment, the aligned latent is converted to RGB just for visualization.", "figure_data": "", "figure_id": "fig_7", "figure_label": "10117", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Qualitative comparison of different sampling strategies. Single-frame sampling introduces temporal inconsistency. Autoregressive sampling shows the error accumulation problem. The proposed bidirectional propagation solves both the problems.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "HR i-1→i as input (3 channels). The VAE decoder D [8] receives the final latent, i.e. x i 0 , of a frame i as input, and converts it into an RGB frame. This latent-to-RGB conversion applies ×4 upscaling, hence the output of the decoder represents the upscaled frame. The overall number of parameters in StableVSR (including the VAE decoder [8]) is about 712 million.", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Table 4 .4Additional ablation experiments for Temporal Texture Guidance, quantitative results. Perceptual metrics are marked with ⋆, reconstruction metrics with ⋄, and temporal consistency metrics with •. Best results in bold text. We also report single-image results as the baseline. The proposed Temporal Texture Guidance leads to significant improvements in temporal consistency. MC refers to \"motion compensation\". For \"No guidance on x0\" experiment, we use guidance on xt. Results computed on center crops of 512 × 512 target resolution of REDS [24].", "figure_data": "", "figure_id": "fig_10", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Additional ablation experiments for the Temporal Texture Guidance. We show the results obtained on three consecutive frames. Only the proposed solution ensures temporal consistency at the fine-detail level over time. Results on sequence 015 of REDS [24].", "figure_data": "", "figure_id": "fig_11", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "(b) Results on REDS [24].", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. Additional qualitative comparison with state-of-the-art methods. Our results are shown in the last column (StableVSR).", "figure_data": "", "figure_id": "fig_13", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "• We present StableVSR: the first work that approaches VSR under a generative paradigm using LDMs. 
It significantly enhances the perceptual quality of upscaled videos while ensuring temporal consistency; • We design Temporal Texture Guidance containing detailrich and spatially-aligned texture information synthesized in adjacent frames. It guides the generative process of the current frame toward the generation of detailed and temporally consistent frames; • We introduce Frame-wise Bidirectional Sampling strategy with forward and backward information propagation.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative comparison with state-of-art methods. Perceptual metrics are marked with ⋆ and reconstruction metrics with ⋄. Best results in bold text. All the perceptual metrics highlight the proposed StableVSR achieves better perceptual quality.", "figure_data": "Bicubic0.2890.20923.270.3588.4429.750.8480.4530.18626.890.3046.8526.130.729ToFlow [37]0.1520.15040.790.3648.0532.280.898-------EDVR [35]-------0.1780.08265.440.3674.1531.020.879TDAN [32]0.1200.12246.540.3867.3434.100.919-------BasicVSR [3]0.1030.11348.970.3767.2735.180.9310.1650.08165.740.3714.0631.390.891VRT [18]0.0840.10051.080.3897.1136.350.9420.1730.08165.680.3744.1931.590.889BasicVSR++ [4]0.0920.10550.110.3837.1235.690.9370.1310.06867.000.3813.8732.380.907RVRT [19]0.0880.10150.450.3877.1236.300.9420.1280.06767.440.3923.7832.740.911StableVSR (Ours)0.0700.08750.970.4145.9931.970.8770.0970.04567.540.4172.7327.970.800", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative comparison of different sampling strategies. Perceptual metrics are marked with ⋆, reconstruction metrics with ⋄, and temporal consistency metrics with •. Best results in bold text. Here we only report full-reference metrics. We can see the proposed Frame-wise Bidirectional Sampling strategy leads to better results.", "figure_data": "Single-frame2.440.1210.05526.320.732Auto-regressive1.600.1190.05926.480.743Unidirectional1.570.1000.04727.740.788Bidirectional1.510.0970.04527.970.800", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Architectural details of StableVSR.", "figure_data": "Denoising UNet Temporal Conditioning Module VAE decoderDownscaling×8×8-Upscaling×8-×4Input channels734Output channels4-3TrainableNoYesNoParameters473 M207 M32 M", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "WE• ↓ LPIPS⋆ ↓ DISTS⋆ ↓ PSNR⋄ ↑ SSIM⋄ ↑", "figure_data": "Baseline (single-image)2.440.1210.05526.320.732No guidance on x03.110.1350.06525.690.706No motion compensation2.510.1160.05126.610.751No Latent → RGB conv.2.460.1130.50026.650.753Proposed1.510.0970.04527.970.800", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" } ]
Claudio Rota; Marco Buzzelli; Joost Van De Weijer
[ { "authors": "", "journal": "BasicVSR++", "ref_id": "b0", "title": "StableVSR (ours)", "year": "" }, { "authors": "Andreas Blattmann; Robin Rombach; Huan Ling; Tim Dockhorn; Seung Wook Kim; Sanja Fidler; Karsten Kreis", "journal": "", "ref_id": "b1", "title": "Align your latents: High-resolution video synthesis with latent diffusion models", "year": "2023" }, { "authors": "Yochai Blau; Tomer Michaeli", "journal": "", "ref_id": "b2", "title": "The perception-distortion tradeoff", "year": "2018" }, { "authors": "Xintao Kelvin Ck Chan; Ke Wang; Chao Yu; Chen Change Dong; Loy", "journal": "", "ref_id": "b3", "title": "Basicvsr: The search for essential components in video super-resolution and beyond", "year": "2021" }, { "authors": "Shangchen Kelvin Ck Chan; Xiangyu Zhou; Chen Change Xu; Loy", "journal": "", "ref_id": "b4", "title": "Basicvsr++: Improving video superresolution with enhanced propagation and alignment", "year": "2022" }, { "authors": "Yinbo Chen; Sifei Liu; Xiaolong Wang", "journal": "", "ref_id": "b5", "title": "Learning continuous image representation with local implicit image function", "year": "2021" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "Advances in neural information processing systems", "ref_id": "b6", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Keyan Ding; Kede Ma; Shiqi Wang; Eero P Simoncelli", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b7", "title": "Image quality assessment: Unifying structure and texture similarity", "year": "2020" }, { "authors": "Patrick Esser; Robin Rombach; Bjorn Ommer", "journal": "", "ref_id": "b8", "title": "Taming transformers for high-resolution image synthesis", "year": "2021" }, { "authors": "Patrick Esser; Johnathan Chiu; Parmida Atighehchian; Jonathan Granskog; Anastasis Germanidis", "journal": "", "ref_id": "b9", "title": "Structure and content-guided video synthesis with diffusion models", "year": "2023" }, { "authors": "Sicheng Gao; Xuhui Liu; Bohan Zeng; Sheng Xu; Yanjing Li; Xiaoyan Luo; Jianzhuang Liu; Xiantong Zhen; Baochang Zhang", "journal": "", "ref_id": "b10", "title": "Implicit diffusion models for continuous super-resolution", "year": "2023" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in neural information processing systems", "ref_id": "b11", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jonathan Ho; Chitwan Saharia; William Chan; David J Fleet; Mohammad Norouzi; Tim Salimans", "journal": "The Journal of Machine Learning Research", "ref_id": "b12", "title": "Cascaded diffusion models for high fidelity image generation", "year": "2022" }, { "authors": "Junjie Ke; Qifei Wang; Yilin Wang; Peyman Milanfar; Feng Yang", "journal": "", "ref_id": "b13", "title": "Musiq: Multi-scale image quality transformer", "year": "2021" }, { "authors": "Diederik Kingma; Jimmy Ba", "journal": "", "ref_id": "b14", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Wei-Sheng Lai; Jia-Bin Huang; Oliver Wang; Eli Shechtman; Ersin Yumer; Ming-Hsuan Yang", "journal": "", "ref_id": "b15", "title": "Learning blind video temporal consistency", "year": "2018" }, { "authors": "Christian Ledig; Lucas Theis; Ferenc Huszár; Jose Caballero; Andrew Cunningham; Alejandro Acosta; Andrew Aitken; Alykhan Tejani; Johannes Totz; Zehan Wang", "journal": "", "ref_id": "b16", "title": "Photorealistic single image super-resolution using a 
generative adversarial network", "year": "2017" }, { "authors": "Haoying Li; Yifan Yang; Meng Chang; Shiqi Chen; Huajun Feng; Zhihai Xu; Qi Li; Yueting Chen", "journal": "Neurocomputing", "ref_id": "b17", "title": "Srdiff: Single image super-resolution with diffusion probabilistic models", "year": "2022" }, { "authors": "Jingyun Liang; Jiezhang Cao; Yuchen Fan; Kai Zhang; Rakesh Ranjan; Yawei Li; Radu Timofte; Luc Van Gool", "journal": "", "ref_id": "b18", "title": "Vrt: A video restoration transformer", "year": "2022" }, { "authors": "Jingyun Liang; Yuchen Fan; Xiaoyu Xiang; Rakesh Ranjan; Eddy Ilg; Simon Green; Jiezhang Cao; Kai Zhang; Radu Timofte; Luc V Gool", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b19", "title": "Recurrent video restoration transformer with guided deformable attention", "year": "2022" }, { "authors": "Bee Lim; Sanghyun Son; Heewon Kim; Seungjun Nah; Kyoung Mu; Lee ", "journal": "", "ref_id": "b20", "title": "Enhanced deep residual networks for single image super-resolution", "year": "2017" }, { "authors": "Hongying Liu; Zhubo Ruan; Peng Zhao; Chao Dong; Fanhua Shang; Yuanyuan Liu; Linlin Yang; Radu Timofte", "journal": "Artificial Intelligence Review", "ref_id": "b21", "title": "Video super-resolution based on deep learning: a comprehensive survey", "year": "2022" }, { "authors": "Zhengxiong Luo; Dayou Chen; Yingya Zhang; Yan Huang; Liang Wang; Yujun Shen; Deli Zhao; Jingren Zhou; Tieniu Tan", "journal": "", "ref_id": "b22", "title": "Videofusion: Decomposed diffusion models for high-quality video generation", "year": "2023" }, { "authors": "Anish Mittal; Rajiv Soundararajan; Alan C Bovik", "journal": "IEEE Signal processing letters", "ref_id": "b23", "title": "Making a \"completely blind\" image quality analyzer", "year": "2012" }, { "authors": "Seungjun Nah; Sungyong Baik; Seokil Hong; Gyeongsik Moon; Sanghyun Son; Radu Timofte; Kyoung Mu; Lee ", "journal": "", "ref_id": "b24", "title": "Ntire 2019 challenge on video deblurring and superresolution: Dataset and study", "year": "2019" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b25", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "Springer", "ref_id": "b26", "title": "Unet: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Claudio Rota; Marco Buzzelli; Simone Bianco; Raimondo Schettini", "journal": "Artificial Intelligence Review", "ref_id": "b27", "title": "Video restoration based on deep learning: a comprehensive survey", "year": "2023" }, { "authors": "Hshmat Sahak; Daniel Watson; Chitwan Saharia; David Fleet", "journal": "", "ref_id": "b28", "title": "Denoising diffusion probabilistic models for robust image super-resolution in the wild", "year": "" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Chitwan Saharia; Jonathan Ho; William Chan; Tim Salimans; David J Fleet; Mohammad Norouzi", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b30", "title": "Image super-resolution via 
iterative refinement", "year": "2022" }, { "authors": "Zachary Teed; Jia Deng", "journal": "Springer", "ref_id": "b31", "title": "Raft: Recurrent all-pairs field transforms for optical flow", "year": "2020" }, { "authors": "Yapeng Tian; Yulun Zhang; Yun Fu; Chenliang Xu", "journal": "", "ref_id": "b32", "title": "Tdan: Temporally-deformable alignment network for video super-resolution", "year": "2020" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b33", "title": "Attention is all you need", "year": "2017" }, { "authors": "Jianyi Wang; Kelvin Ck Chan; Chen Change Loy", "journal": "", "ref_id": "b34", "title": "Exploring clip for assessing the look and feel of images", "year": "2023" }, { "authors": "Xintao Wang; Kelvin Ck Chan; Ke Yu; Chao Dong; Chen Change Loy", "journal": "", "ref_id": "b35", "title": "Edvr: Video restoration with enhanced deformable convolutional networks", "year": "2019" }, { "authors": "Zhou Wang; Alan C Bovik; Hamid R Sheikh; Eero P Simoncelli", "journal": "IEEE transactions on image processing", "ref_id": "b36", "title": "Image quality assessment: from error visibility to structural similarity", "year": "2004" }, { "authors": "Tianfan Xue; Baian Chen; Jiajun Wu; Donglai Wei; William T Freeman", "journal": "International Journal of Computer Vision", "ref_id": "b37", "title": "Video enhancement with task-oriented flow", "year": "2019" }, { "authors": "Sihyun Yu; Kihyuk Sohn; Subin Kim; Jinwoo Shin", "journal": "", "ref_id": "b38", "title": "Video probabilistic diffusion models in projected latent space", "year": "2023" }, { "authors": "Lvmin Zhang; Anyi Rao; Maneesh Agrawala", "journal": "", "ref_id": "b39", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b40", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Hongkai Zheng; Weili Nie; Arash Vahdat; Kamyar Azizzadenesheli; Anima Anandkumar", "journal": "PMLR", "ref_id": "b41", "title": "Fast sampling of diffusion models via operator learning", "year": "2023" }, { "authors": "Xizhou Zhu; Han Hu; Stephen Lin; Jifeng Dai", "journal": "", "ref_id": "b42", "title": "Deformable convnets v2: More deformable, better results", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 99.87, 377.5, 186.49, 30.2 ], "formula_id": "formula_0", "formula_text": "q(x 1 , ..., x T |x 0 ) = T t=1 q(x t |x t-1 )(1)" }, { "formula_coordinates": [ 3, 117, 466.83, 169.36, 17.63 ], "formula_id": "formula_1", "formula_text": "x t = √ α t x 0 + √ 1 -α t ϵ(2)" }, { "formula_coordinates": [ 3, 89.15, 554.23, 197.21, 30.2 ], "formula_id": "formula_2", "formula_text": "p θ (x 0 , ..., x T -1 |x T ) = T t=1 p θ (x t-1 |x t )(3)" }, { "formula_coordinates": [ 3, 79.05, 638.83, 207.32, 23.59 ], "formula_id": "formula_3", "formula_text": "µ θ (x t , t) = 1 √ α t x t - 1 -α t √ 1 -α t ϵ θ (x t , t)(4)" }, { "formula_coordinates": [ 3, 73.87, 692.67, 212.5, 24.66 ], "formula_id": "formula_4", "formula_text": "x t-1 = 1 √ α t x t - 1 -α t √ 1 -α t ϵ θ (x t , t) + σ t z (5)" }, { "formula_coordinates": [ 3, 355.05, 118.62, 190.06, 24.76 ], "formula_id": "formula_5", "formula_text": "x0 = 1 √ α t x t - √ 1 -α t ϵ θ (x t , t)(6)" }, { "formula_coordinates": [ 3, 313.84, 170.22, 231.27, 30.24 ], "formula_id": "formula_6", "formula_text": "x t-1 = √ α t-1 (1 -α t ) 1 -α t x0 + √ α t (1 -α t-1 ) 1 -α t x t +σ t z (7)" }, { "formula_coordinates": [ 4, 147.12, 351.53, 385.14, 363.48 ], "formula_id": "formula_7", "formula_text": "0 is computed from x i-1 t using x t x0 ||x 0 -x0 || ||HR -x 0 || t = 900 t = 500 t = 25" }, { "formula_coordinates": [ 5, 77.09, 400.29, 205.4, 16.15 ], "formula_id": "formula_8", "formula_text": "HR i-1→i = MC(ME(LR i-1 , LR i ), D(x i-1 0 )) (8" }, { "formula_coordinates": [ 5, 282.49, 405.94, 3.87, 8.64 ], "formula_id": "formula_9", "formula_text": ")" }, { "formula_coordinates": [ 5, 346.72, 170.75, 136.66, 10.95 ], "formula_id": "formula_10", "formula_text": "HR i-1→i = MC(ME(LR i-1 , LR i ), D(x i-10" }, { "formula_coordinates": [ 5, 346.82, 181.66, 176.19, 10.78 ], "formula_id": "formula_11", "formula_text": "ε = ϵ θ (x i t , t, LR i , HR i-1→i ) if i > 1 else ϵ θ (x i t , t, LR i )" }, { "formula_coordinates": [ 5, 347.21, 188.65, 197.9, 16.37 ], "formula_id": "formula_12", "formula_text": "xi 0 = 1 √ α t x i t - √ 1 -αtε ▷ Eq. 6" }, { "formula_coordinates": [ 5, 346.72, 217.17, 198.39, 13.22 ], "formula_id": "formula_13", "formula_text": "x i t-1 = 1 √ α t x i t - 1-α t √ 1-α t ε + σtz ▷ Eq. 5" }, { "formula_coordinates": [ 5, 308.86, 254.72, 109.03, 37.77 ], "formula_id": "formula_14", "formula_text": "return {HR} N = {D(x0)} N Algorithm 2" }, { "formula_coordinates": [ 5, 336.26, 334.26, 99.67, 9.76 ], "formula_id": "formula_15", "formula_text": "x i-1 0 , x i 0 = E(HR i-1 ), E(HR i )" }, { "formula_coordinates": [ 5, 336.26, 343.97, 61.8, 7.76 ], "formula_id": "formula_16", "formula_text": "ϵ i-1 , ϵ i ∼ N (0, I)" }, { "formula_coordinates": [ 5, 343.69, 357.5, 148.74, 13.27 ], "formula_id": "formula_17", "formula_text": "-1 = ϵ θ ( √ αtx i-1 0 + √ 1 -αtϵ i-1 , t, LR i-1 )" }, { "formula_coordinates": [ 5, 314.62, 367.22, 230.49, 28.09 ], "formula_id": "formula_18", "formula_text": "0 = 1 √ α t x i t - √ 1 -αtε i-1 ▷ Eq. 6 8: HR i-1→i = MC(ME(LR i-1 , LR i ), D(x i-10" }, { "formula_coordinates": [ 5, 346.72, 400.25, 182.15, 13.1 ], "formula_id": "formula_19", "formula_text": "∇ θ (||ϵ i -ϵ θ ( √ αtx i 0 + √ 1 -αtϵ i , t, LR i , HR i-1→i )||)" }, { "formula_coordinates": [ 5, 317.87, 697.37, 227.24, 19.58 ], "formula_id": "formula_20", "formula_text": "E t,x i 0 ,ϵ,LR i , HR i-1→i [||ϵ -ϵ θ (x i t , t, LR i , HR i-1→i )||],(9)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Current tracking operations for flightline assets such as people, aircraft, and support equipment, are performed through manual, visual inspection. There is no central log for the position of these assets, nor is there a capability to assess their real-time position besides line-ofsight verification. Relying on visual asset tracking on Navy flight lines inhibits efficiency.\nReadiness, sortie generation rate, and safety could be improved through enhanced situational awareness of command and control elements. Industry standard alternatives to visual asset tracking have not solved all these issues because placing tracking devices on equipment creates electronic emissions. They also can require configuration changes to hardware (i.e., technical directives) which limits their applicability to Navy aircraft. There exist new technology solutions to allow passive tracking of assets using computer vision software, as well as commercial-off-the-shelf hardware to track personnel and support equipment.\nIn this work, two methods are presented which provide situational awareness for tracking assets on the flightline. In the first method, hardware and software were developed to track the position of people and support equipment with GPS sensors over a LoRaWAN installation. In the second method, aircraft were tracked using passive computer vision software and sticker decals called AprilTags." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b0", "b1", "b2", "b2", "b3" ], "table_ref": [], "text": "Global Positioning System (GPS) tracking systems are used by companies to track the location and movement of assets, such as delivery vehicles and packages. GPS tracking systems use a network of satellites to determine NAVAIR Public Release 2023-020 Distribution Statement A -\"Approved for public release; distribution is unlimited\" the precise location of a device on the ground. GPS tracking systems can be used for a variety of purposes, including fleet management, asset tracking, and logistics. These systems allow real-time visibility and awareness of the current position and expected arrival time of packages [1]. The widespread use of GPS and ethical implications of its use have been explored in [2].\nAprilTags are a popular marker-based visual fiducial system, developed by the University of Michigan. AprilTags are small, distinctive black and white square tags that can be attached to objects or printed on flat surfaces. They are designed to be easily detected and recognized by machine vision algorithms, even when partially occluded or under challenging lighting conditions [3].\nAprilTags are used for a variety of applications, including robotic localization and mapping, augmented reality, and object tracking. They are particularly useful for robotics applications because they can be easily detected and recognized at a distance, even when the robot is moving [3].\nLoRaWAN (Long Range Wide Area Network) is a type of wireless communication technology that is designed for low-power, long-range communication. It is commonly used for Internet of Things (IoT) applications, such as asset tracking, smart city infrastructure, and remote sensing. LoRaWAN operates in the unlicensed radio frequency spectrum and uses a spread spectrum modulation technique called chirp spread spectrum (CSS) to transmit data over long distances with low power consumption. 
It is designed to provide bi-directional communication over a range of several kilometers, depending on the specific implementation and operating conditions.\nOne of the main advantages of LoRaWAN is its low power consumption, which allows it to be used with battery-powered devices that need to operate for long periods of time without the need for frequent battery replacements. It is also well suited for applications that require long-range communication, as it can transmit data over distances that are beyond the range of other wireless technologies such as WiFi or Bluetooth.\nLoRaWAN is typically used in conjunction with gateways that connect the network to the Internet and allow devices to communicate with each other and with cloud-based applications. It is an open standard that is widely adopted for IoT applications [4]." }, { "figure_ref": [ "fig_0", "fig_3", "fig_4", "fig_5", "fig_6" ], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "The first phase of the effort involved developing and installing GPS tracking devices on a custom LoRaWAN network at NAS Oceana. The GPS positions would relay their position to users over a custom user interface on a laptop connected to the network as seen in Figure 1. The purpose of this phase was to maintain situational awareness for personnel and support equipment. In the second phase, a prototype fiducial tracking system was created to track the realtime position of aircraft.\nFirst, the environmental scope of the tracking system was established. These included the field of view, depth of field, and weather conditions. The field of view and depth of field were determined based on the testing environment at United States Naval Test Pilot School (USNTPS) depicted below in Figure 5. The weather conditions would exclude extreme weather, but could include variable sunny or cloudy conditions, or even light rain. During the week of testing, there was light rain and variable sunny and cloudy conditions.\nThe camera was selected to try to optimize for the maximum number of aircrafts captured in view. The Panasonic LUMIX DC-BGHI camera was used to capture video in this experiment. It is a box style camera that allowed for 4K recording of videos. The Rokinon 12mm T2.2 Cine Lens for Micro Four Thirds Mount lens was used in this experiment. This is a wide angled lens for digital cinematography. A wide angle lens was chosen to increase the field of view being covered by the camera. The AprilTag family selected was 52h13 which was the most robust, freely available AprilTag family at the time.\nAprilTag detection software was developed to track multiple AprilTags of various sizes. This software was integrated with a user interface to show the location of the aircraft relative to the camera. The information was stored for post processing, playback, and analysis. A direct linear transform (DLT) was used to calibrate the location of the camera system from a set of known points in the video.\nThe AprilTags were initially tested at Lakehurst on the Experimental Ground Vehicle (EGV) depicted below in Figure 6. Different materials NAVAIR Public Release 2023-020 Distribution Statement A -\"Approved for public release; distribution is unlimited\" were used to generate different sized April Tags. Matte black and white tags were used for their high detection rate, and because these materials were approved to be used in experiments on aircraft at USNTPS. The setup was further tested on real at USNTPS. 
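A minimal sketch of the kind of multi-tag detection and pose estimation described above, using the open-source pupil-apriltags bindings to the AprilTag 3 detector. The library choice, the camera intrinsics, and the tag size below are illustrative assumptions rather than the fielded software, and the tagStandard52h13 family is assumed to be available in the installed build.

import cv2
from pupil_apriltags import Detector

# Illustrative camera intrinsics (fx, fy, cx, cy) in pixels and tag edge length in meters
CAMERA_PARAMS = (1450.0, 1450.0, 1920.0, 1080.0)
TAG_SIZE_M = 0.5

detector = Detector(families="tagStandard52h13")  # family used in this effort

def detect_tags(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    detections = detector.detect(
        gray, estimate_tag_pose=True,
        camera_params=CAMERA_PARAMS, tag_size=TAG_SIZE_M)
    # Each detection carries the tag id, image corners, and a pose (R, t) of the tag
    # relative to the camera, from which a relative flightline position can be derived.
    return [(d.tag_id, d.pose_R, d.pose_t) for d in detections]

A calibration step such as the direct linear transform mentioned above would then relate these camera-frame poses to flightline coordinates.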
Over the course of a week, multiple AprilTags of varying sizes were attached to multiple aircrafts as shown in Figure 7. An example tagged aircraft can be seen in Figure 8. Next, the cameras were set up to detect the AprilTags while they taxied past the cameras as shown in Figure 9. The footage was then processed, and detections were translated into 3D coordinate estimates without relying on other sensors such as GPS. The tags were removed with minimal impact on the aircrafts' exteriors at the end of the experiment." }, { "figure_ref": [ "fig_7", "fig_8" ], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "The GPS sensors were able to report the realtime location of personnel and support equipment over the LoRaWAN network at NAS Oceana. The trajectories of all tagged assets were able to be replayed during post analysis as shown in Figure 10 below. Several decals were installed on multiple aircrafts at the United States Naval Test Pilot School. The application tools (squeegees) were used to ensure flat application. However, small air bubbles were still present upon close inspection. Tags of multiple sizes were used, depending on the size of the free space available on the aircraft. To meet flight criteria, the tags did not cover any existing control surfaces which limited their size and location.\nSeveral videos were recorded at USNTPS and processed by the AprilTag detecting software. An example processed image is shown in Figure 11. The software was moderately successful at detecting the AprilTags, depending on the size of the tag, angle to the tag, and distance to aircraft. The light rain and glare from the sun did not have a significant impact on detection. In the future, it is recommended to have cameras with a stronger optical zoom to allow for detections at greater distances. Multiple AprilTag colors and materials were tested on the EGV. The color did not significantly impact performance. The application of the decal proved to be more critical than the color; a flat application with strong adherence was crucial to avoid warping. AprilTag detection occurred in real time at 1080p, but more computing power and parallelization is needed to process 4K videos in real time. The 4K videos were first recorded and processed afterwards.\nThe further the tags were from the camera, the more uncertain the tag location and pose estimates. The wide-angle lens allowed for capturing more aircraft in a single image, but the tag detections were less robust due to the increased field of view. A different lens with a greater focal length, or a second camera dedicated to zooming, would allow for more robust detection." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Overall, this effort demonstrated the capability to track people and support equipment in real time using GPS sensors and a LoRaWAN network and aircraft in real time using AprilTags and COTS camera hardware.\nCOTS GPS sensors were successfully used to track personnel and support equipment over a custom LoRaWAN installation. The software showed the ability to have real-time awareness for the states of assets as well as provided the capability for virtual playback of past events.\nNAVAIR Public Release 2023-020 Distribution Statement A -\"Approved for public release; distribution is unlimited\"\nFiducial decals were successfully used to identify and locate aircraft. 
The methods used to manufacture, apply, track, and remove decals met requirements.\nIn the future, it is recommended to apply the dark AprilTags onto a white continuous background prior to applying the background and the tag onto the aircraft together. This method would be easier and would prevent colors or patterns pre-existing on the aircraft from interfering with AprilTag detection. It is recommended to use Pan Tilt Zoom (PTZ) cameras to improve the maximum detection distance while maintaining thorough coverage of the flightline." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "The authors would like to acknowledge Alex Wendt and Christopher Thajudeen for their efforts in constructing and designing the LoRaWAN network and GPS sensors. The authors would also like to acknowledge Daniel Bramos for his effort in interfacing with NAS Oceana and USNTPS and managing the demonstrations. The authors would also like to acknowledge Jianyu An, Kevin Larkins, and Tushar Patel for helping perform experiments. Ari Goodman is the S&T AI Lead and a Robotics Engineer in the Robotics and Intelligent Systems Engineering (RISE) lab at Naval Air Warfare Center Aircraft Division (NAWCAD) Lakehurst. In this role he leads efforts in Machine Learning, Computer Vision, and Verification & Validation of Autonomous Systems. He received his MS in Robotics Engineering from Worcester Polytechnic Institute in 2017. Ryan O'Shea is a Computer Engineer in the Robotics and Intelligent Systems Engineering (RISE) lab at Naval Air Warfare Center Aircraft Division (NAWCAD) Lakehurst. His current work is focused on applying computer vision, machine learning, and robotics to various areas of the fleet to augment sailor capabilities and increase overall operational efficiency. He received a Bachelor's Degree in Computer Engineering from Stevens Institute of Technology." } ]
Real-time situational awareness for the location of assets is critical to ensure missions are completed efficiently and requirements are satisfied. In many commercial settings, the application of global positioning system (GPS) sensors is appropriate to achieve timely knowledge of the position of people and equipment. However, GPS sensors are not appropriate for all situations due to flight clearance and operations security concerns. LIFT OFF: LoRaWAN Installation and Fiducial Tracking Operations for the Flightline of the Future proposes a hybrid framework solution to achieve real-time situational awareness for people, support equipment, and aircraft positions regardless of the environment. This framework included a machine-vision component, which involved setting up cameras to detect AprilTag decals that were installed on the sides of aircraft. The framework included a geolocation sensor component, which involved installing GPS sensors on support equipment and helmets. The framework also included creating a long-range wide area network (LoRaWAN) to transfer data and developing a user interface to display the data. The framework was tested at Naval Air Station Oceana Flightline, the United States Naval Test Pilot School, and at
[ { "figure_caption": "Figure 1 :1Figure 1: User Interface and Live Plot for GPS LocationsGPS trackers were purchased and installed in different locations depending on the application. For personnel, the tracker was installed in the helmet shown in Figure2that is commonly worn on the flight line. For support equipment like fuel trucks, a housing was 3D printed to store the electronics and attached to the wind shield as shown in Figure3. In total, 22 GPS units were attached to various support equipment and personnel.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :Figure 3 :23Figure 2: Helmets for Flightline Personnel", "figure_data": "", "figure_id": "fig_1", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: LoRaWAN Architecture", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Overview of USNTPS", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Experimental Ground Vehicle with multiple AprilTags and Detections The experiments with the EGV validated the following equation used to estimate maximum detection distance: 𝐷𝑖𝑠𝑡𝑎𝑛𝑐𝑒 = 𝑡 2 * 𝑡𝑎𝑛( 𝑏 * 𝑓 * 𝑝 2 * 𝑟 ) Max detection distance in meters [5] t = size of tag in meters b = number of bits that span the width of the tag f = horizontal FOV p = the number of pixels required to detect a bit r = horizontal resolution For these validating experiments, AprilTags were applied to the asset and the camera was moved away until the AprilTags were no longer detected.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Applying AprilTag to Aircraft", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Camera Setup Capturing Aircraft Footage from Indoors", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Example Replay of GPS Trajectory", "figure_data": "", "figure_id": "fig_7", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Aircraft Detection with Updating Map", "figure_data": "", "figure_id": "fig_8", "figure_label": "11", "figure_type": "figure" } ]
Ari Goodman; Ryan O'shea
[ { "authors": "G Mintsis; S Basbas; P Papaioannou; C Taxiltaris; I N Tziavos", "journal": "European journal of operational Research", "ref_id": "b0", "title": "Applications of GPS technology in the land transportation system", "year": "2004" }, { "authors": "S A Inks; T W Loe", "journal": "Marketing Management Journal", "ref_id": "b1", "title": "The ethical perceptions of salespeople and sales managers concerning the use of GPS tracking systems to monitor salesperson activity", "year": "2005" }, { "authors": "E Olson", "journal": "IEEE", "ref_id": "b2", "title": "AprilTag: A robust and flexible visual fiducial system", "year": "2011-05" }, { "authors": "J Haxhibeqiri; E De Poorter; I Moerman; J Hoebeke", "journal": "Sensors", "ref_id": "b3", "title": "A survey of LoRaWAN for IoT: From technology to application", "year": "2018" }, { "authors": "D Nugent", "journal": "", "ref_id": "b4", "title": "Designing the perfect apriltag. Optitag", "year": "2020-06-29" } ]
[]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Tracking of aircraft, people, and support equipment to update the Ouija board is performed manually through visual inspection by several sailors. This method leads to data delays, errors, and reduced situational awareness. This task is labor intensive, requires intense focus for extended periods of time, and requires multiple personnel stationed during all flight operations. The Program of Record has identified a need for a technology to enable automatic tracking of assets on deck for over a decade. PATRIOT is a new solution allow for faster, more accurate, and less laborious asset tracking, as well as to enable future efforts focused on optimization, logging, and robust tracking.\nIn this work, the detection, classification, and pose estimation components for PATRIOT are presented along with experiments to quantify their training and performance." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b0" ], "table_ref": [], "text": "PATRIOT's main tasking is broken down into three tasks: detection, classification, and pose estimation. The detection task involves identifying regions in the image and associating them with objects of interest. The classification task involves applying a class label to each object detected in the image. The pose estimation task involves estimating the 3D position and orientation of an object from its 2D image. There are numerous challenges in these tasks including perspective distortion, variable lighting conditions, noisy data, and occlusions [1]." }, { "figure_ref": [], "heading": "Detection and classification algorithms have been the focus of many computer vision efforts.", "publication_ref": [ "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b8", "b7", "b8", "b9" ], "table_ref": [], "text": "There are several freely available state-of-the-art open-source solutions such a YOLO [2] and unlimited\" Faster R-CNN [3]. Although well-trained, freely available models may need retraining, also called transfer learning or fine-tuning, to meet performance requirements on new datasets [4,5].\nAlthough pose estimation is not as well studied as detection and classification, several methods have been developed for estimating the pose of people and aircraft. Many state-of-the-art methods in single object pose estimation break down the process into two main steps. First, the algorithms identify the location of key features of objects, such as the nose or wingtips of an aircraft. This step is referred to as the feature detection or keypoint detection step. Then, the pose is calculated using Perspective-n-Point (PnP) solvers [6]. PnP algorithms traditionally use the known 3D points, corresponding image points, and camera parameters to estimate the real-world pose of the known object. PnP algorithms attempt to minimize the error between the projected points in 2D and the measured points in the image [7].\nThree state-of-the-art methods for keypoint detection are HRNet [8], HHRNet [9], OpenPifPaf [10].\nThere are pros and cons for each approach. In general, top-down methods like HRNet tend to be more accurate when dealing with large changes in scale of objects in an image because the bounding box step essentially normalizes the scale. Bottom-up methods, on the other hand, are more accurate when dealing with overlapping objects. 
In terms of processing speed, because bottom-up methods like HHRNet and OpenPifPaf don't have the additional step of running an object detector first, their prediction times can be faster [9].\nAll three algorithms have been demonstrated to achieve strong performance on a variety of benchmarks and have been used in a wide range of applications. However, their performance can vary based on the task and dataset, so it was unclear how they would perform with PATRIOT's datasets [8,9,10].\nAn alternative approach to pose estimation is to use direct linear transforms (DLT) and decoders. DLT works by projecting points onto an image place using known intrinsic and extrinsic camera parameters. DLT can also be used to estimate the camera parameters using two sets of known 3D and 2D points. Encoder-decoder networks are a type of deep learning architecture, but are traditionally smaller than the aforementioned HRNet, HHRNet, and OpenPifPaf networks, and therefore may require less data to train. Encoderdecoder networks consist of two main parts: an encoder, which processes the input data and encodes it into a compact representation, and a decoder, which takes the encoded representation and converts it back into the desired output. In the context of PATRIOT, an encoder-decoder network could be used to estimate the orientation of aircraft in images. The encoder would process the input image and extract relevant features, such as the shape and texture of the aircraft, while the decoder would learn to use these features to estimate the orientation of the aircraft." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "Four pipelines were developed, each which are distinguished by their respective use of HRNet, Higher HRNet, OpenPifPaf, and a decoder. Each pipeline was designed to take in an image and output a list of objects with pose, class, and confidences.\nIn this work, an experiment is included comparing all pipelines on a common synthetic dataset. Another experiment compares three OpenPifPaf models on real-world data; one model was trained on synthetic data, another on real-world, and a final model was trained on both real-world and synthetic data." }, { "figure_ref": [ "fig_0", "fig_2" ], "heading": "Frameworks", "publication_ref": [ "b10" ], "table_ref": [], "text": "Four frameworks were used to train and evaluate candidate algorithms for pose estimation. In the first approach shown in Figure 1, an image was passed to a detection and classification component, Faster R-CNN. The output from Faster R-CNN and the image was passed to a keypoint detection model, HRNet. The keypoint detection model produced a list of keypoint locations found for the object in the image. Next, the keypoints and class were passed into a PnP solver to estimate the realworld position of all the keypoints [11]. Finally, the keypoints and their labels were passed to a Singular Value Decomposition (SVD) solving algorithm to minimize the error between translating, rotating, and scaling a known point cloud set of keypoints to the estimated keypoints. In the third and fourth approaches shown in Figures 3 and4, an image was passed into bottom-up algorithms HHRNet or OpenPifPaf. These algorithms directly estimated the sets of keypoints for each object. Then, the keypoints and class were passed into the same PnP and SVD components as in the first framework." 
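The keypoint-to-pose step shared by these frameworks can be sketched as follows, assuming OpenCV's PnP solver and a NumPy implementation of the SVD-based rigid alignment (the scale term is omitted for brevity); the keypoint arrays and camera matrix are placeholders, not the project's actual data.

# Hedged sketch of the PnP and SVD components: estimate the camera-relative pose of a
# known keypoint model, then rigidly align a known keypoint cloud to estimated 3D keypoints.
import numpy as np
import cv2

def pose_from_keypoints(model_pts_3d, image_pts_2d, camera_matrix, dist_coeffs=None):
    """PnP: recover rotation R and translation t of the aircraft keypoint model."""
    dist_coeffs = np.zeros(5) if dist_coeffs is None else dist_coeffs
    ok, rvec, tvec = cv2.solvePnP(
        model_pts_3d.astype(np.float64),   # (N, 3) known keypoints on the aircraft model
        image_pts_2d.astype(np.float64),   # (N, 2) detected keypoints in the image
        camera_matrix,
        dist_coeffs,
    )
    R, _ = cv2.Rodrigues(rvec)
    return ok, R, tvec

def svd_align(known_pts, estimated_pts):
    """Kabsch-style SVD fit: rotation and translation mapping known_pts onto estimated_pts."""
    mu_a, mu_b = known_pts.mean(axis=0), estimated_pts.mean(axis=0)
    A, B = known_pts - mu_a, estimated_pts - mu_b
    U, _, Vt = np.linalg.svd(A.T @ B)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_b - R @ mu_a
    return R, t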
}, { "figure_ref": [ "fig_4", "fig_5", "fig_6" ], "heading": "Datasets", "publication_ref": [ "b11", "b12", "b13" ], "table_ref": [], "text": "A combination of real-world data and synthetic data was used to develop and test the pose estimation pipeline.\nA small assortment of video imagery previously recorded during the EATS TEMPALT aboard the U.S.S. Truman, CVN-75 was labeled and used as a real-world dataset. A panoramic camera assembly and two additional fixed cameras mounted on the island provided full video coverage of the flight deck. A real-world dataset was created with 4,964 annotated images with an additional 553 used for validation and testing. Synthetic data was used in this project because only a small amount of real-world data was available. In addition, synthetic data allowed for the quick and accurate labeling of images, as well as full control over images that would be difficult to capture, such as unique aircraft configurations, lighting, or weather. and open-source 3D creation suite [12]. Freely available open-source models of the aircraft carrier and F-18 were used. The carrier model was created by Alexdark [13] and the F/A-18 Hornet was created by KuhnIndustries [14]. An example of the synthetic environment can be seen in Figure 6 and a result of a synthetic rendered camera image is shown in Figure 7. The rendered camera image is a simulated version of the panoramic camera that is installed on the actual carrier. It is made up of a series of 5 cameras co-located next to each other on the carrier island. 17 key features on the aircraft were identified to train the keypoint detection methods based on geometric location. Future work could address the identification of features that are most easily detected. The 17 keypoints are highlighted in a skeleton in Figure 8. " }, { "figure_ref": [ "fig_7" ], "heading": "Faster R-CNN", "publication_ref": [], "table_ref": [], "text": "Faster R-CNN is an object detector model that uses a convolutional neural network (CNN) based architecture. The Faster R-CNN architecture detected and classified aircraft in images. It output a list of bounding boxes, classes, and confidences. Figure 9 shows Faster R-CNN working on real-world data.\nFor Faster R-CNN, the training parameters were: train_batch_size = 1, num_epochs = 10, lr = 0.005, momentum = 0.9, weight_decay = 0.005." }, { "figure_ref": [ "fig_7" ], "heading": "Direct Linear Transform", "publication_ref": [], "table_ref": [], "text": "The DLT algorithm projects points onto an image place using known intrinsic and extrinsic camera parameters. Given a 2D point, DLT estimates the corresponding camera ray in 3D. In PATRIOT's dataset, it was assumed that objects are on the carrier deck, and therefore have a known Z height. Therefore, DLT was used to solve for the real-world X, Y, and Z position of a 2D point in the image. An example image with Faster R-CNN working with the DLT is shown in Figure 9. DLT was also used to estimate the camera parameters using two sets of corresponding, known 3D and 2D points. DLT attempted to find the camera parameters that minimized the error to project the points from one space to the other. The authors mapped dozens of known points from the ship environment to the 2D image to calibrate the cameras." 
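A compact sketch of the two DLT uses described above, estimating a 3x4 projection matrix from known 3D-2D correspondences and then recovering the deck-plane position of a detected pixel at a known height, might look like the following; coordinate normalization and lens distortion are omitted for brevity, and the input points are placeholders.

# Hedged sketch of the DLT calibration and deck-plane back-projection.
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P from at least six known 3D-2D correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)            # null-space vector = flattened P, up to scale

def pixel_to_deck(P, u, v, z_deck=0.0):
    """Intersect the viewing ray through pixel (u, v) with the deck plane Z = z_deck."""
    p1, p2, p3 = P
    A = np.array([[p1[0] - u * p3[0], p1[1] - u * p3[1]],
                  [p2[0] - v * p3[0], p2[1] - v * p3[1]]])
    b = -np.array([(p1[2] - u * p3[2]) * z_deck + (p1[3] - u * p3[3]),
                   (p2[2] - v * p3[2]) * z_deck + (p2[3] - v * p3[3])])
    X, Y = np.linalg.solve(A, b)
    return X, Y, z_deck

With the center of a Faster R-CNN bounding box as (u, v), pixel_to_deck yields the object's estimated position on the flight deck.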
}, { "figure_ref": [ "fig_8", "fig_0", "fig_1" ], "heading": "Encoder-Decoder Network", "publication_ref": [ "b14" ], "table_ref": [], "text": "The encoder-decoder network utilized a standard convolution autoencoder structure with the addition of a second yaw estimation head. The encoder portion of the network learned to use convolution layers to extract information from NAVAIR Public Release 2023-019 Distribution Statement A -\"Approved for public release; distribution is unlimited\" the image and distill it down into an encoded representation vector. The decoder portion of the network learned to use inverse convolution layers to reconstruct the input image from the encoded representation vector; the reconstruction loss between images was used as a measure of confidence. Part of the encoded representation vector is also used by the yaw estimation head to calculate the yaw of the object. The yaw estimation head was structured as a fully connected neural network that terminated in overlapping yaw range bins. Each yaw bin represented a set range of rotations that an object could possibly take on. After the most confident bin is selected, the network regressed the final yaw of the object based on the predefined bin centers. An overview of the algorithm is shown in Figure 10. Three loss functions were used during training and could be weighted to increase or decrease focus on specific tasks. The three loss functions were decoder image reconstruction loss, bin selection loss, and rotational offset loss. The ADAM optimizer was used in conjunction with the three loss functions to train the network for a set number of epochs [15].\nThe encoder-decoder's performance on a synthetic dataset is shown in Figures 11 and12 The HRNet architecture was used to detect keypoints from the images provided by Faster R-CNN. It output a list of keypoint heatmaps. For each heatmap, the pixel with the highest heat value was selected as the keypoint location.\nThe training parameters used were: batch_size_per_gpu: 8, shuffle: true, begin_epoch: 0, end_epoch: 120, optimizer: adam, lr: 0.0005, lr_factor: 0.1, lr_step: -90 -110 wd: 0.0001, gamma1: 0.99, gamma2: 0.0, momentum: 0.9" }, { "figure_ref": [ "fig_11" ], "heading": "OpenPifPaf", "publication_ref": [], "table_ref": [], "text": "OpenPifPaf is a bottom-up keypoint detection approach based on ResNet with two head networks. The OpenPifPaf architecture was used to detect keypoints from the entire image. An example of OpenPifPaf working on preliminary real-world data is shown in Figure 13. workers=8 -val-interval=100 -weight-decay=1e-5" }, { "figure_ref": [ "fig_0", "fig_3" ], "heading": "Higher HRNet", "publication_ref": [], "table_ref": [], "text": "HHRNet is also a bottom-up keypoint detection approach. The HHRNet architecture was used to detect keypoints from the entire image. A selected result quantifying its performance on real-world data is shown in Figure 14. An experiment was conducted to compare the various pose estimation methods. A synthetic environment in Blender was chosen for its controllability and ease of creation. A video was created in which two aircraft moved throughout the scene to test each algorithm when the aircraft were partially and fully occluded. An example image taken from the processed demonstration video is shown in Figure 15." }, { "figure_ref": [], "heading": "How Useful is Synthetic Data?", "publication_ref": [], "table_ref": [], "text": "Three OpenPifPaf models were evaluated on a real-world dataset. 
First, an OpenPifPaf model was trained on a synthetic Blender data set. Second, a model was trained on a subset of the real-world dataset. Third, the synthetically trained model was retrained on a subset of realworld data." }, { "figure_ref": [ "fig_14" ], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "Figure Running the synthetic data trained OpenPifPaf model on real-world data did not generate consistent or useable results, however it did show some promise even though it had only been trained on synthetic data. OpenPifPaf only trained on real-world data had consistent and useable results, but the best results were from the model trained with synthetic and real-world data. As shown in Figure 18, there were more keypoint detections and the confidence level is higher. This experiment demonstrated that the combination of synthetic data with real-world data was helpful to improve keypoint detection." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "A pipeline and framework for detection, classification, and pose estimation of assets in real-world and synthetic data was successfully demonstrated. A keypoint detection model trained on synthetic data plus real-world data can detect keypoints in real-world footage.\nA synthetic environment was demonstrated to be useful in performing experiments and supplementing real-world data when training pose estimation software.\nThe full pipeline and SOTA algorithms within meet requirements to detect, classify, and estimate the pose of aircraft in real-time when properly trained on the appropriate data. The best performing algorithm for speed and accuracy was OpenPifPaf.\nThe limited data set that was annotated for proof of concept in this project will be expanded to improve the performance of the machine learning models, as well as to allow for fusion and tracking" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors would like to acknowledge Vitaly Ablavsky from the University of Washington for his assistance in designing the encoder-decoder network. The authors would also like to acknowledge Ric Rey Vergara, Mark Blair, and Daniel Vidal for interfacing with Navy personnel." }, { "figure_ref": [], "heading": "Ari", "publication_ref": [], "table_ref": [], "text": "Goodman is the S&T AI Lead and a Robotics Engineer in the Robotics and Intelligent Systems Engineering (RISE) lab at Naval Air Warfare Center Aircraft Division (NAWCAD) Lakehurst. In this role he leads efforts in Machine Learning, Computer Vision, and Verification & Validation of Autonomous Systems. He received his MS in Robotics Engineering from Worcester Polytechnic Institute in 2017." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b3" ], "table_ref": [], "text": "Dr. James Hing serves as the Branch Head of the Strategic Technologies Branch, NAWCAD Lakehurst, where he leads a team of 25 engineers, including 4 " } ]
Deck tracking performed on carriers currently involves a team of sailors manually identifying aircraft and updating a digital user interface called the Ouija Board. Improvements to the deck tracking process would result in increased Sortie Generation Rates, and therefore applying automation is seen as a critical method to improve deck tracking. However, the requirements on a carrier ship do not allow for the installation of hardware-based location sensing technologies like Global Positioning System (GPS) sensors. PATRIOT (Panoramic Asset Tracking of Real-Time Information for the Ouija Tabletop) is a research effort and proposed solution to performing deck tracking with passive sensing and without the need for GPS sensors. PATRIOT is a prototype system which takes existing camera feeds, calculates aircraft poses, and updates a virtual Ouija board interface with the current status of the assets. PATRIOT would allow for faster, more accurate, and less laborious asset tracking for aircraft, people, and support equipment. PATRIOT is anticipated to benefit the warfighter by reducing cognitive workload, reducing manning requirements, collecting data to improve logistics, and enabling an automation gateway for future efforts to improve efficiency and safety. The authors have developed and tested algorithms to perform pose estimations of assets in real-time including OpenPifPaf, High-Resolution Network (HRNet), HigherHRNet (HHRNet), Faster R-CNN, and in-house developed encoder-decoder network. The software was tested with synthetic and realworld data and was able to accurately extract the pose of assets. Fusion, tracking, and real-world generality are planned to be improved to ensure a successful transition to the fleet.
[ { "figure_caption": "Figure 1 :1Figure 1: HRNet Framework", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Decoder FrameworkIn the second approach shown in Figure2, the output from the detector and classifier, Faster R-CNN, was instead passed to a decoder network and a Direct Linear Transform (DLT) component. The decoder network was trained to estimate the orientation of a single object in the image. It was assumed that the object was on the carrier deck (Z=0). The DLT component estimated the real-world X and Y position of the object under the assumption the object's pixel position was at the center of Faster R-CNN's bounding box. Combining the DLT's position and the decoder's orientation estimate formed the final pose estimate.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Higher HRnet Framework", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Example of Panoramic Camera Footage", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Example of Blender Environment Blender 3.3 was used as a tool to render synthetic scenes for PATRIOT. Blender is a free", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Blender Rendered Camera vs Real-World Footage 10,000 synthetic images were generated: 8,000 for training and 2,000 for testing and validation.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Labeled Aircraft Skeleton", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Faster R-CNN and DLT on Real-World Data", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Encoder-Decoder Network Diagram", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": ".", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 11 :Figure 12 :1112Figure 11: Encoder-Decoder Angular Error", "figure_data": "", "figure_id": "fig_10", "figure_label": "1112", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Selected OpenPifPaf Results on Real-World DataThe training parameters used were: --lr=0.0002 -momentum=0.95 -b-scale=5.0 -epochs=1000lr-warm-up-epochs=100 -batch-size=9 -loader-", "figure_data": "", "figure_id": "fig_11", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :Figure 15 :1415Figure 14: Selected Higher HRNet Results on Real-World Data The training parameters used were: loss: num_stages: 2, ae_loss_type: exp, with_ae_loss: [true, false], push_loss_factor: [0.001, 0.001], pull_loss_factor: [0.001, 0.001], with_heatmaps_loss: [true, true], heatmaps_loss_factor: [1.0, 1.0], begin_epoch: 0, checkpoint: '', end_epoch: 300, gamma1: 0.99, gamma2: 0.0, images_per_gpu: 12, lr: 0.001, lr_factor: 0.1, lr_step: [200, 260], momentum: 0.9, nesterov: false, optimizer: adam, resume: false, shuffle: true, wd: 0.0001 Evaluation Experiment", "figure_data": "", "figure_id": "fig_12", "figure_label": "1415", 
"figure_type": "figure" }, { "figure_caption": "Figure 16: Graph of Error for Each Framework for Each Aircraft", "figure_data": "", "figure_id": "fig_13", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 18 :18Figure 18: Top: Real-World Trained Model vs. Bottom: Synthetic + Real World Trained Model", "figure_data": "", "figure_id": "fig_14", "figure_label": "18", "figure_type": "figure" } ]
Ari Goodman; James Hing; Gurpreet Singh; Ryan O'shea
[ { "authors": "A Newell; K Yang; J Deng", "journal": "Springer", "ref_id": "b0", "title": "Stacked hourglass networks for human pose estimation", "year": "2016-10" }, { "authors": "J Redmon; S Divvala; R Girshick; A Farhadi", "journal": "", "ref_id": "b1", "title": "You only look once: Unified, real-time object detection", "year": "2016" }, { "authors": "S Ren; K He; R Girshick; J Sun", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "N Tajbakhsh; J Y Shin; S R Gurudu; R T Hurst; C B Kendall; M B Gotway; J Liang", "journal": "IEEE transactions on medical imaging", "ref_id": "b3", "title": "Convolutional neural networks for medical image analysis: Full training or fine tuning?", "year": "2016" }, { "authors": "H C Shin; H R Roth; M Gao; L Lu; Z Xu; I Nogues; . . Summers; R M ", "journal": "IEEE transactions on medical imaging", "ref_id": "b4", "title": "Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning", "year": "2016" }, { "authors": "L Pishchulin; E Insafutdinov; S Tang; B Andres; M Andriluka; P V Gehler; B Schiele", "journal": "", "ref_id": "b5", "title": "Deepcut: Joint subset partition and labeling for multi person pose estimation", "year": "2016" }, { "authors": "V Lepetit; F Moreno-Noguer; P Fua", "journal": "International journal of computer vision", "ref_id": "b6", "title": "Epnp: An accurate o (n) solution to the pnp problem", "year": "2009" }, { "authors": "K Sun; B Xiao; D Liu; J Wang", "journal": "", "ref_id": "b7", "title": "Deep high-resolution representation learning for human pose estimation", "year": "2019" }, { "authors": "B Cheng; B Xiao; J Wang; H Shi; T S Huang; L Zhang", "journal": "", "ref_id": "b8", "title": "Higherhrnet: Scale-aware representation learning for bottom-up human pose estimation", "year": "2020" }, { "authors": "S Kreiss; L Bertoni; A Alahi", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b9", "title": "Openpifpaf: Composite fields for semantic keypoint detection and spatio-temporal association", "year": "2021" }, { "authors": "V Lepetit; F Moreno-Noguer; P Fua", "journal": "International journal of computer vision", "ref_id": "b10", "title": "Epnp: An accurate o (n) solution to the pnp problem", "year": "2009" }, { "authors": "", "journal": "Blender Foundation", "ref_id": "b11", "title": "Blender [Computer Software", "year": "2022" }, { "authors": " Alexdark", "journal": "", "ref_id": "b12", "title": "Blend swapaircraft carrier USS Nimitz. Blend Swap | Aircraft Carrier USS Nimitz", "year": "2012-05-30" }, { "authors": " Kuhnindustries", "journal": "", "ref_id": "b13", "title": "Blend Swap | High Poly Hornet", "year": "2013-04-10" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b14", "title": "Adam: A method for stochastic optimization", "year": "2014" } ]
[]
2023-11-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b10", "b17", "b50", "b62", "b55", "b53", "b57", "b52", "b71", "b44" ], "table_ref": [], "text": "Automated video analysis has broad applications in computer vision research, benefiting diverse fields like self-driving cars, public safety monitoring, and sports analysis [11,16,18,43,51,63]. A principal challenge in this field is Temporal Action Localization (TAL) in untrimmed video streams, with the objective being to accu-rately pinpoint the start and end times of actions and to categorize them accordingly [56,57]. Recent advancements in fully-supervised TAL methods have shown promising improvements [54,58,68]. However, they depend on the detailed annotation of start and end timestamps, along with action labels for every action in training videos, which is both labor-intensive and expensive. To diminish the dependence on extensive labeling throughout the training stage, there has been a growing interest in the advancement of methodologies that operate under limited supervision [10,52,53,72]. Specifically, point-supervised TAL requires the annotation of only a single frame within the temporal window of each action instance in the input video [26,29,41,45,61]. Point-level supervision significantly lowers the annotation costs in comparison to full supervision, while providing essential information about the approximate locations and the total number of action instances.\nIn temporal action detection, pseudo-labels are primarily defined as estimated action boundaries (proposals) along with their corresponding action labels. A recent trend aimed at bridging the gap between point-supervised and fullysupervised TAL relies on self-training, wherein pseudolabels are generated by a base point-supervised model. These pseudo-labels act as substitute action annotations, enabling the training of models under limited supervision. Current techniques generate pseudo-labels by creating proposals based on thresholds applied to the predicted action classification probabilities. However, these methods have several shortcomings. Firstly, they are highly sensitive to the choice of threshold values; varying thresholds can lead to significant shifts in the alignment of proposals with ground-truth instances. Secondly, they often yield an excess of redundant and overlapping proposals, which are unsuitable as pseudo-labels. Ideally, there should be a one-toone correspondence between pseudo-labels and action instances. Lastly, these methods struggle to generate complete action proposals and are sensitive to inconsistencies in action classification scores.\nWe introduce an innovative approach to generate pseudolabels by modeling the distribution of action classification probabilities as a combination of Gaussian and uniform distributions. This methodology is based on the observation that certain action instances exhibit homogenous classification probabilities across snippets, resembling a uniform distribution. In contrast, for other actions, snippets near the action boundaries, which often include ambiguous or transitional movements, show lower classification probabilities, resembling a Gaussian distribution. This combination effectively captures the full spectrum of action instances. Our base point-supervised model predicts background snippets and action classification probabilities for each action class in the video. 
For each annotated action point, preliminary action boundaries are determined by identifying the nearest background timestamps before and after the annotated point. Then, a mixed distribution model is fitted to the action classification probabilities within these boundaries, minimizing the mean squared error (MSE) loss using Brent's method [3]. Consequently, high-quality pseudolabels are generated that overcome prior challenges: 1) eliminating reliance on arbitrary thresholding, 2) ensuring the creation of a single proposal for each action instance, and 3) maintaining robustness against fluctuations in action classification probabilities. Additionally, we propose learning action boundary snippets during the training of the main model by modeling the distribution of action scores. Although snippets near the action boundaries often have lower classification scores compared to more central action snippets, differentiating these boundary snippets from the background is essential. During training, we compare the predicted classification probabilities with the Gaussian kernels to reinforce the consistency of action scores across the entire range of actions, including boundaries. This process, supervised with our proposed loss functions, enhances the model's accuracy in estimating action durations and in generating complete proposals. Our contributions are summarized as follows:\n• We propose a novel strategy for pseudo-label generation in self-training, where the predicted action classification probabilities are modeled as a composite of Gaussian and uniform distributions. The effectiveness of the strategy is evidenced by the high-quality pseudo-labels it generates. • We propose a framework of learning action boundary snippets during the training of the main model to generate complete action proposals for testing. This process involves comparing the predicted action classification probabilities with a Gaussian kernel predicted by our model. Our designed loss functions supervise the learning of Gaussian parameters and the predicted probability signals.\n• Our ADM-Loc framework outperforms the state-ofthe-art point-supervised methods on THUMOS'14 and ActivityNet-v1.2 datasets." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b12", "b1", "b29", "b68", "b3", "b1", "b27", "b7", "b47", "b18", "b26", "b46", "b43", "b54", "b63", "b16", "b39", "b52", "b61" ], "table_ref": [], "text": "Fully-supervised TAL. Fully-supervised methods can be grouped into anchor-based and anchor-free. Anchor-based methods generate pre-defined action proposals distributed across temporal locations [9, 13,14]. They extract fixedsize features from the proposals to evaluate their quality.\nAnchor-free methods generate proposals with flexible duration by predicting actionness and action offset for each snippet. [2,30,31,33,34]. Temporal feature pyramid is introduced to model actions of varying duration [32,37,38,69].\nModeling temporal dependencies in videos has been addressed by recurrent neural networks [4,5], graph convolutions [2,28,60,65,70], and transformers [8,48,68]. Unlike these methods that require detailed frame-level annotations, our framework relies solely on point-level annotations. We employ a multi-scale transformer architecture to model the temporal dependencies of video snippets and to handle actions of varying durations.\nWeakly-supervised TAL. The methods often require only the video-level labels of actions for training, while the temporal boundaries of actions are not needed. 
Majority of the weakly-supervised methods rely on the Multi-Instance Learning (MIL) to learn actionness and classification scores to detect discriminative action regions and eliminate background snippets [19,27,39,46,47,50]. To generate complete action proposals, some methods have proposed adversarial complementary learning approaches to discover different parts of actions by increasing the weight of less discriminative parts of the video [35,44,55,64,71]. Another category of methods rely on self-training scheme to generate pseudo-labels on the train set from an initial base model. The pseudo-labels provide additional supervision for the main model to improve the training [17,40,49,53,62,66]. These methods often fail to generate high-quality pseudolabels. In contrast, our model, employing slightly more annotations, produces pseudo-labels that are significantly better aligned with the ground-truth action instances.\nPoint-supervised TAL. Point-level supervision significantly reduces the cost of annotation by labeling a single point for each action instance. SF-Net [41] proposed to expanded each annotated single frame to its nearby frames to mine pseudo action frames and utilized the unannotated frames to mine pseudo background frames. PTAL [23] performed boundary regression based on keyframe prediction. Back-TAL [61] introduced background-click supervision by annotating a random frame from a series of consecutive background frames. Lee et al. [26] developed an actionbackground contrast method to capture action completeness. We propose a novel approach for generating highquality pseudo-labels using a base point-supervised model. These pseudo-labels then guide our main model in learning action continuity and in generating complete action proposals during testing. " }, { "figure_ref": [], "heading": "Our Proposed Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Point-Supervised Formulation", "publication_ref": [], "table_ref": [], "text": "Given an input video, a single annotated point with the action category is provided for each action instance, denoted by {t i , y i } Nact i=1 . The i-th action instance is annotated at the t i -th snippet with its action label y i , and N act is the total number of action instances in the input video. The label y i is a binary vector with y i [c] = 1 if the i-th action instance belongs to class c and otherwise 0 for C action classes." }, { "figure_ref": [], "heading": "Backbone Architecture", "publication_ref": [], "table_ref": [], "text": "A multi-scale temporal transformer is employed as the backbone architecture. Given an input video, snippet-level visual features are extracted with a pre-trained visual encoder (I3D [7]) and concatenated to generate a video feature sequence X ∈ R T ×D , where T is the number of snippets and D is the feature dimensionality. Each snippet feature is embedded using a shallow temporal convolutional network resulting in feature sequence Z 0 ∈ R T ×D . This feature sequence is the input to the transformer network to model the temporal dependencies using local self-attention [68]. To represent actions with different duration, a temporal feature pyramid is constructed by down-sampling transformer blocks using a strided depthwise 1D convolution. The feature pyramid is denoted by\nZ = {Z 1 , Z 2 , • • • , Z L } where Z l ∈ R T l ×D\nis the output of level l. Also, T l = T /θ l , and θ is the down-sampling ratio. 
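To make the pyramid construction and the shared per-level prediction head concrete, a simplified PyTorch sketch is given below, with the transformer blocks abstracted away and layer sizes chosen for illustration only; it reproduces the tensor shapes and the class-specific/class-agnostic fusion of eq. (1) that follows, not the authors' exact implementation.

# Hedged sketch: strided depthwise down-sampling and a shared sigmoid head per pyramid level.
import torch
import torch.nn as nn

class PyramidHead(nn.Module):
    def __init__(self, dim=512, num_classes=20, levels=4, theta=2):
        super().__init__()
        # Strided depthwise 1D convolutions shrink the temporal length by theta per level.
        self.downsample = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel_size=3, stride=theta, padding=1, groups=dim)
            for _ in range(levels - 1)
        )
        # Shallow 1D conv head with parameters shared across levels; last channel = background.
        self.head = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(dim, num_classes + 1, kernel_size=1),
        )

    def forward(self, z1):                       # z1: (B, D, T), level-1 features
        feats, fused = [z1], []
        for ds in self.downsample:
            feats.append(ds(feats[-1]))          # (B, D, T / theta^l)
        for z in feats:
            p = torch.sigmoid(self.head(z))      # (B, C + 1, T_l): per-class and background probs
            fg = 1.0 - p[:, -1:, :]              # class-agnostic score = 1 - background probability
            fused.append(p[:, :-1, :] * fg)      # fused class probabilities, cf. eq. (1)
        return fused                             # list of (B, C, T_l), one entry per pyramid level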
Feature pyramid captures multi-scale temporal information, enabling the model to capture both short-term and long-term temporal dependencies, leading to a more comprehensive representation of action dynamics. A shallow 1D convolutional network is attached to each pyramid level with its parameters shared across all levels. A sigmoid function is attached to each output dimension to predict the probability of actions and background. The output of the l-th level of the feature pyramid is a probability sequence, denoted by P l ∈ R T l ×C+1 , where T l is the temporal dimension on the l-th level. Additionally, P l [t, C + 1] is the probability of background at time t on level l. The complement of the background probability is the class-agnostic score. The class-specific and class-agnostic scores are fused to derive the final probability sequence Pl ∈ R T l ×C+1 .\nPl [t, c] = P l [t, c](1 -P l [t, C + 1]).\n(1)" }, { "figure_ref": [], "heading": "Point-supervised Base Model", "publication_ref": [], "table_ref": [], "text": "Augmented annotations. We augment the point-level annotations for improved training by defining a vicinity around each annotated point with a hyper-parameter radius r a . Specifically, for the i-th action instance containing the annotated point t i and its corresponding label y i , the label y i is assigned to all snippets within radius r a . The augmented annotation set is denoted by Φ.\nΦ = {([t i -r a , t i + r a ], y i )} Nact i=1 .(2)\nThe augmented annotation set on level l is defined as the following where θ represents the down-sampling ratio. The notation is simplified on the second line. N l is the number of labeled points on level l after augmentation.\nΦ l = {([(t i /θ l ) -r a , (t i /θ l ) + r a ], y i )} Nact i=1 = {(t j , y j )} N l j=1 .\n(3)\nVideo-level action prediction. The video-level score for class c is defined as the average of action probabilities for class c over the top-k temporal positions on each level l of the pyramid, denoted by Pl [c]. The Multiple Instance Learning (MIL) loss [12] is utilized to supervise the predictions. The video-level label is denoted by y.\nL MIL = - 1 L L l=1 C c=1 y[c] log( Pl [c])+(1-y[c]) log(1-Pl [c]).(4)\nSnippet-level action prediction. The snippet-level focal loss is employed to optimize the probability signal Pl for each level l of the pyramid. γ is the focusing parameter (set to 2) and N ⋆ act is the number of positive instances.\nL Act = - 1 N ⋆ act L l=1 Nl j=1 C c=1 y j [c] log( Pl [t j , c])(1 -Pl [t j , c]) γ -(1 -y j [c]) log(1 -Pl [t j , c]) Pl [t j , c] γ .\n(\n)5\nBackground prediction. To distinguish actions from the background, we select the temporal positions not belonging to any of the augmented annotated points and possessing a background probability exceeding a certain threshold on each level l of the pyramid. The background points on level l are denoted by {b j } M l j=1 with p l (b j ) as the probability of background at time b j . The background loss is employed to optimize the probability signals Pl for all levels. M bg is the total number of background points.\nL BG = - 1 M bg L l=1 M l j=1 C c=1 ( Pl [b j , c]) γ log(1 -Pl [b j , c]) + (1 -p l (b j )) γ log p l (b j ).(6)\nJoint training. The total loss for the base model is a weighted combination of the three aforementioned losses where λ ⋆ terms are determined through empirical analysis.\nL Total = λ MIL L MIL + λ Act L Act + λ BG L BG .(7)\n3.4. 
Actionness Distribution Modeling (ADM)" }, { "figure_ref": [], "heading": "Pseudo-label Generation with ADM", "publication_ref": [], "table_ref": [], "text": "Our proposed pseudo-labels generation method on the training set models the distribution of action classification probabilities predicted by the base model. This distribution is represented as a combination of Gaussian and uniform distributions. The rationale behind this modeling is that certain action instances exhibit uniform classification probabilities across snippets, resembling a uniform distribution. Conversely, actions with ambiguous boundaries or transitional movements tend to have lower classification probabilities near the boundaries, indicative of a Gaussian distribution. This combination of distributions captures the full spectrum of action instances.\nAfter training the base model, action classification probabilities are extracted from the final level of the multi-scale transformer, denoted by PL ∈ R T L ×C+1 . This choice is made because the larger receptive field at the last feature pyramid level exhibits fewer fluctuations in action probabilities across neighboring snippets, making it more suitable for our modeling purposes. A Gaussian filter is also applied to smooth the signal and reduce the impact of minor inconsistencies in action classification probabilities. The resolution of the last-level probability signal is upgraded to match that of the first level, resulting in signal PL ∈ R T ×(C+1) .\nThe background points are predicted from the first level of the pyramid because the lower resolution of the first level excels at detecting fine-grained information. The annotated action points and the predicted background points are denoted by {(t i , y i )} Nact i=1 , and {b j } Nbkg j=1 , respectively. For each annotated action point (t i , y i ), we determine preliminary action boundaries by identifying the nearest background points immediately preceding and succeeding the annotated point, denoted by\nβ i = [b s i , b e i ].\nIf point t i belongs to action class c (i.e., y i [c] = 1), the objective is to estimate the boundaries of the i-th action instance using signal PL [t, c] within the interval β i . Within interval β i and within distance δd i from the annotated point t i , we locate the snippet t ⋆ i with the peak probability of class c. Here, d i is the duration of β i and δ is a hyper-parameter.\nt ⋆ i = argmax t ( PL [t, c]) for t ∈ (β i ∩ [t i -δd i , t i + δd i ]).\n(8) The intuition behind selecting the peak point t ⋆ i is that this point is the most representative snippet of class c in the vicinity of point t i . The point t ⋆ i is treated as the mean of the uniform and the Gaussian distributions. For the i-th action instance, the signal PL [t, c] is set to zero outside the interval β i . We fit a Gaussian distribution centered at t ⋆ i to PL [t, c] for each action instance. Gaussian distribution is defined as follows where t, µ, and σ represent the temporal axis, mean, and standard deviation.\nG(t, µ, σ) = 1 σ √ 2π e -1 2 ( t-µ σ ) 2(9)\nThe Gaussian distribution can be uniquely defined for the i-th action instance as G(t, t ⋆ i , σ i ) by estimating the standard deviation σ i . 
An upper bound u b and a lower bound l b are estimated for σ i with respect to boundaries of β i .\nu b = max(t ⋆ i -b s i , b e i -t ⋆ i ), l b = 10 -6 .(10)\nThus, the objective is to find the optimal σ i within range\n[l b , u b ] to fit Gaussian distribution G(t, t ⋆ i , σ i ) to probability signal PL [t, c].\nWe address this optimization problem by minimizing the following MSE loss using Brent's method with the bounded variant [3].\nL G-fit MSE = t∈βi α • G(t, t ⋆ i , σ i ) -PL [t, c] 2 . (11\n)\nα is a scale factor equal to PL [t ⋆ i , c]/G(t ⋆ i , t ⋆ i , σ i ). Brent's method [3] is a root-finding algorithm that iteratively adjusts the sigma σ i within specified bounds l b and u b to find an optimal standard deviation for the Gaussian component. The same process is applied to find an ideal width ω i for the uniform component.\nL U-fit MSE = t∈βi U (t ⋆ i , ω i ) -PL [t, c] 2 . (12\n)\nThe linear combination of parameters σ i and ω i defines the final interval duration ∆ i for the i-th action where ∆ i = γ 1 σ i + γ 2 ω i . The duration ∆ i defines the estimated interval\nI i = [t ⋆ i -∆ i , t ⋆ i + ∆ i ].\nFor each video, the pseudo-labels set includes the annotated point t i , the predicted sigma σ i , the estimated interval I i , and the label y i , as below:\nΨ = {(t i , σ i , I i , y i )} Nact i=1 where I i = [t ⋆ i -∆ i , t ⋆ i + ∆ i ].(13)" }, { "figure_ref": [], "heading": "The Main Model: ADM-Loc", "publication_ref": [ "b26", "b16" ], "table_ref": [], "text": "The backbone of the main model is a multi-scale transformer (described in 3.2). The model is supervised with the pseudo-labels set Ψ = {(t i , σ i , I i , y i )} Nact i=1 generated by actionness distribution modeling in eq. 13. The main model is trained with the losses in eq. 7 as well as two additional losses introduced in this section.\nLearning boundary snippets. The L Act loss (eq. 5) supervises the learning of probability signal Pl for the i-th action instance only within interval I i which is merely an estimation of the the action boundaries. It is probable that the interval I i fails to encompass snippets near the action boundaries, which are often ambiguous and include transitional movements. Nevertheless, the model needs to classify these boundary snippets as part of the action to generate complete action proposals during testing. Although the action probabilities at these boundary snippets might be lower compared to the more representative action snippets, it remains essential for the model to differentiate these boundary snippets from the background. We impose this by comparing the probability signal Pl with a Gaussian kernel, reinforcing the consistency of action classification probabilities for the entire duration of action. The probability signal predicted by the first level of the feature pyramid, denoted as P1 ∈ R T1×C+1 , exhibits the highest variability in action probability predictions due to its small receptive field. As a result, this signal particularly benefits from being compared against a Gaussian kernel to stabilize these fluctuations.\nStandard deviation prediction. The extracted feature sequence from the first level of the pyramid is denoted by Z 1 ∈ R T1×D . For the i-th action instance, K features are sampled from Z 1 within pseudo-label interval I i and fed to a regression head to predict the standard deviation σi . The regression head consists of temporal convolutions, layer normalization, and the sigmoid function to predict the value of σi between [0, 1]. 
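Putting the fitting procedure of this section together (it is also what produces the sigma targets that supervise the regression head just described), a minimal per-instance sketch using SciPy's bounded scalar minimizer, the bounded Brent variant cited above, is shown below; the uniform component is given the peak height here as one reasonable reading of U(t*, w), and the gamma weights are placeholders.

# Hedged sketch of fitting the Gaussian and uniform components to one action instance.
import numpy as np
from scipy.optimize import minimize_scalar

def gaussian(t, mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def fit_instance(prob_c, t_star, b_start, b_end, gamma1=1.0, gamma2=1.0):
    """prob_c: smoothed class-c probability signal; t_star: peak snippet inside [b_start, b_end]."""
    t = np.arange(b_start, b_end + 1)
    y = prob_c[b_start:b_end + 1]
    peak = y[t_star - b_start]

    def gauss_mse(sigma):
        g = gaussian(t, t_star, sigma)
        g = g * (peak / g[t_star - b_start])            # alpha rescaling so the peak matches
        return np.mean((g - y) ** 2)

    def uniform_mse(omega):
        u = np.where(np.abs(t - t_star) <= omega, peak, 0.0)   # one reading of U(t*, omega)
        return np.mean((u - y) ** 2)

    u_b = max(t_star - b_start, b_end - t_star)         # upper bound from the interval boundaries
    sigma = minimize_scalar(gauss_mse, bounds=(1e-6, u_b), method="bounded").x
    omega = minimize_scalar(uniform_mse, bounds=(1e-6, u_b), method="bounded").x

    delta = gamma1 * sigma + gamma2 * omega             # combined interval half-width
    return sigma, (t_star - delta, t_star + delta)      # sigma_i and the pseudo-label interval I_i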
We use the values of {σ i } Nact i=1 from the pseudo-labels to determine parameter K and rescale the predicted σi . This prediction is supervised using an MSE loss, which measures the discrepancy between the predicted and pseudo-label standard deviations.\nL σ MSE = 1 N act Nact i=1 (σ i -σi ) 2 . (14\n)\nGaussian imposition. The set S c denotes the set of action classes that occur in a given video. A Gaussian kernel is defined to represent the i-th action instance formulated as follows where t i is the annotated point and σi is the predicted standard deviation.\nG i (t, t i , σi ) = e -1 2 t-t i σi 2 . (15\n)\nFor each action class c ∈ S c , we mix the Gaussian kernels of all action instances belonging to class c, as follows. \nG c (t) = max {G i (t, t i , σi )| i ∈ [1, N act ], y i [c] = 1} .\n(16) The alignment between the probability signal P1 [t, c] and the Gaussian kernel G c (t) is supervised using the following MSE loss.\nL G MSE = 1 T 1 |S c | c∈Sc T1 t=1 G c (t) -P1 [t, c] 2 . (17\n)\nPseudo-label sampling. We incorporate a pseudo-label sampling strategy during the training process for L Act loss by selecting the snippets around the annotated points within a radius hyper-parameter r s and inside the boundaries of pseudo-labels. The motivation for this sampling is to reduce the likelihood of training the model on false positives. During the pseudo-label generation, the background frames that are erroneously classified as actions constitute the false positives. These are more likely to occur at the boundaries of the pseudo-labels.\nJoint training. The total loss for the main model is a weighted combination of the following losses where λ ⋆ are determined through empirical analysis.\nL Total =λ MIL L MIL + λ Act L Act + λ BG L BG + λ G L G MSE + λ σ L σ MSE . (18\n)\nInference. The action categories are identified using the video-level scores. The action proposals are predicted from all pyramid levels by applying thresholds to the snippetlevel action scores Pl for each level l for the predicted classes and merging consecutive candidate segments. Each proposal is assigned a confidence score based on its outerinner-contrast score [27]. Finally, the non-maximum suppression (NMS) is used to eliminate overlapping proposals. The average number of action instances per video is 15.5. ActivityNet-v1.2 is a large-scale dataset containing 9, 682 videos that includes 100 complex everyday activities. The average number of action instances per video is 1.5. Con-sistent with previous work, our model is trained using the training set and evaluated using the validation set [17,26]." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b5", "b4", "b35", "b19" ], "table_ref": [], "text": "Evaluation metric. The Mean Average Precision (mAP) under different Intersection over Union (IoU) thresholds is utilized as the evaluation metric, wherein the Average Precision (AP) is computed for each action class. On ActivityNet-v1.2 [6], IoU thresholds range from 0.5 to 0.95 in increments of 0.05. As for THUMOS14 [22], they range from 0.1 to 0.7 in increments of 0.1.\nImplementation details. For feature extraction, we use two-stream I3D [7] on both datasets. We fed 16 consecutive frames as the input to the visual encoder, using a sliding window with stride 4 on THUMOS14 and stride 16 on ActivityNet-v1.2. Our multi-scale transformer model is trained with Adam [25] and linear warm-up [36] with the learning rate of 10 -4 . Model EMA [20] is implemented to further stabilize the training. 
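A minimal sketch of the model EMA used to stabilize training (assuming PyTorch; the decay value is a typical choice rather than necessarily the one used here) is:

# Hedged sketch: exponential moving average of model weights, kept for evaluation.
import copy
import torch

class ModelEMA:
    def __init__(self, model, decay=0.999):
        self.ema = copy.deepcopy(model).eval()    # shadow copy holding the averaged weights
        self.decay = decay
        for p in self.ema.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        ema_state = self.ema.state_dict()
        for name, value in model.state_dict().items():
            if value.dtype.is_floating_point:
                ema_state[name].mul_(self.decay).add_(value, alpha=1.0 - self.decay)
            else:
                ema_state[name].copy_(value)      # integer buffers are copied directly

# usage: call ema.update(model) after every optimizer step and evaluate with ema.ema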
The number of epochs and warm-up epochs are set to 100 and 10 on THUMOS14, and 50 and 5 on ActivityNet-v1.2. The batch sizes are set to 3 on THUMOS14, and 64 on ActivityNet-v1.2. The input length is set to 2, 304 for THUMOS14 and to 192 for ActivityNet-v1.2, using padding, random sampling and linear interpolation. To employ local self-attention, the window lengths are set to 19 and 7 on THUMOS14 and ActivityNet-v1.2, respectively. The number of pyramid levels is set to L = 4 and the down sampling ratio θ is set to 2. The annotation augmentation radius r a and the pseudo-label sampling radius r s are set to 2. The parameters r a and r s are defined on the feature grid, representing the distance in terms of the number of features. At inference, the full sequence is fed into the model without sampling. 1 " }, { "figure_ref": [], "heading": "Comparison with State-of-the-art Methods", "publication_ref": [], "table_ref": [], "text": "Table 1 shows a detailed comparison with the leading methods on THUMOS'14 and ActivityNet-v1.2 datasets.\nResults on THUMOS'14: Our model significantly outperforms other point-supervised methods, achieving an average mAP improvement of 7.4%. Notably, this enhancement is nearly 10% at the most stringent IoU threshold of 0.7. Moreover, our model shows a significant gain of 10.6% average mAP increase over weakly-supervised methods, despite using only slightly more annotations. The mAP at the 0.7 IoU threshold is almost double that of its weaklysupervised counterparts.\nResults on ActivityNet-v1.2. Our model outperforms the state-of-the-art weakly and point-supervised methods in terms of mAP, consistently across all the IoU thresholds." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "Quality of pseudo-labels. In Table 2, α represents the ratio of the number of generated proposals to the ground-truth instances in the training set (validation set of THUMOS'14). 1 The source code will be released upon acceptance of the paper.\nThe first row shows the quality of the generated action proposals on the training set using the base model (section 3.3). As shown in the table, the average mAP of these proposals is only 63.8%, and the number of predicted proposals is 12 times the number of ground-truth instances (α = 12). This indicates that a large number of proposals are redundant and overlapping, making them unsuitable to be used as pseudolabels. Ideally, there should be a one-to-one correspondence between the pseudo-labels and action instances. The second row demonstrates the quality of pseudo-labels generated using our proposed Actionness Distribution Modeling (ADM), as detailed in section 3.4. Noticeably, the mAP at the highest IoU of 0.7 is almost doubled compared to the base proposals. Furthermore, ADM generates exactly one proposal (pseudo-label) for each annotated point (α = 1). Table 2. Analysis of the generated pseudo-labels on the validation set of THUMOS'14. α represents the ratio of the number of generated proposals to the ground-truth instances.\nImpact of pseudo-labels. Table 3 shows the impact of supervision in the base model. The first row shows supervision with only points without augmentation (r a = 0), achieving the lowest results. We also compare the performance of the base model when supervised with the augmented points (r a = 2) versus the sampled pseudo-labels (r s = 2). Note that the radius (for both r s and r a ) on level l of the pyramid is r * • θ l = 2 l+1 , for θ = 2 and r * = 2. 
Therefore, the radius can be as large as 32 for level l = 4. The model trained with augmented points selects all snippets within the radius as positive samples, even if the action duration is much shorter than the radius. In contrast, the pseudo-labels effectively limit the positive samples to estimated action boundaries. This results in a performance gain of 5.7% average mAP. Table 3. Impact of the generated pseudo-labels in the base model on THUMOS'14. ra is the annotation augmentation radius, and rs is the pseudo-label sampling radius.\nImpact of the backbone network. Table 4 demonstrates the impact of the backbone multi-scale transformer architecture in ADM-Loc (the main model). Parameter l denotes the number of feature pyramid levels. The pseudolabel sampling radius r s is set to 2. As shown in the table, the highest average mAP is achieved when l = 4, which is comparable to the results for l = 5." }, { "figure_ref": [], "heading": "Levels", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "l = 1 l = 2 l = 3 l = 4 l = 5 mAP@0.7 24.3 29.9 28.5 31.3 30.5 mAP-AVG 56.0 59.6 59.5 60.2 60.4\nTable 4. Impact of the number of pyramid levels (denoted by l) in ADM-Loc backbone on THUMOS'14.\nImpact of pseudo-label sampling. Table 5 demonstrates the impact of the pseudo-label sampling strategy with different sampling radius r s on ADM-Loc (the main model). r s = ∞ indicates no sampling. As indicated in the table, using a sampling radius of r s = 2 results in a 3.2% improvement in average mAP compared to the scenario with no sampling (r s = ∞). This is because pseudolabel sampling decreases the chance of training the model on false positives with the L Act loss (see eq. 5). During the pseudo-label generation, the background frames that are erroneously classified as actions constitute the false positives. These are more likely to occur at the boundaries of the pseudo-labels. Impact of the proposed losses. " }, { "figure_ref": [], "heading": "Radius (r", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We " }, { "figure_ref": [], "heading": "Acknowledgment", "publication_ref": [], "table_ref": [], "text": "This material is based upon work supported by the National Science Foundation under award number 2041307." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_7", "fig_5" ], "heading": "Temporal Action Detection Error Analysis", "publication_ref": [ "b0", "b0" ], "table_ref": [], "text": "To assess the effectiveness and limitations of our ADM-Loc framework, we employ DETAD [1] for analyzing false negatives (Figure 3) and false positives (Figure 4). (fully-supervised), our ADM-Loc (point-supervised) and our base (point-supervised) on THUMOS14 using DETAD [1]." }, { "figure_ref": [], "heading": "False Negative Analysis", "publication_ref": [], "table_ref": [], "text": "the base model (part c) focusing on the top 1G scoring predictions. This comparison reveals that ADM-Loc identifies more true positive instances and exhibits fewer localization and confusion errors. This confirms the effectiveness of ADM-Loc in predicting more precise action boundaries." }, { "figure_ref": [], "heading": "Distribution of Annotated Points", "publication_ref": [], "table_ref": [], "text": "In the point-supervision setting, only a single frame per action instance is annotated in the training set. 
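For readers unfamiliar with this annotation format, the sketch below shows one way such point labels can be simulated from full temporal annotations, using either of the two sampling schemes discussed next; the function name and the particular mean and standard-deviation choices are our assumptions rather than values taken from the cited work.

```python
import random

def simulate_point_label(start: float, end: float, scheme: str = "gaussian") -> float:
    """Draw a single annotated timestamp inside [start, end] for one action instance."""
    if scheme == "uniform":
        return random.uniform(start, end)             # any frame of the action is equally likely
    # Gaussian: biased towards the (usually more discriminative) centre of the action.
    mu = 0.5 * (start + end)
    sigma = (end - start) / 6.0                       # assumed spread; not prescribed here
    t = random.gauss(mu, sigma)
    return min(max(t, start), end)                    # clip back into the action extent
```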
SF-Net [41] proposed to simulate point annotations by sampling a single frame for each action instance. The Uniform distribution method randomly selects a frame within the action boundaries of each action, while the Gaussian distribution method does so with respect to a given mean and standard deviation. Typically, the Gaussian distribution is more likely to sample frames closer to the central timestamps of actions, thereby increasing the chances of choosing a more discriminative snippet. In contrast, the Uniform distribution can sample frames from any part of the action, without this central bias. " }, { "figure_ref": [ "fig_12" ], "heading": "More Qualitative Results", "publication_ref": [], "table_ref": [], "text": "Qualitative results depicted in Figure 5 illustrate various types of errors, including over-completeness, incompleteness, and misalignment, generated by the base model. These issues have been addressed in ADM-Loc. " } ]
This paper addresses the challenge of point-supervised temporal action detection, in which only one frame per action instance is annotated in the training set. Self-training aims to provide supplementary supervision for the training process by generating pseudo-labels (action proposals) from a base model. However, most current methods generate action proposals by applying manually designed thresholds to action classification probabilities and treating adjacent snippets as independent entities. As a result, these methods struggle to generate complete action proposals, exhibit sensitivity to fluctuations in action classification scores, and generate redundant and overlapping action proposals. This paper proposes a novel framework termed ADM-Loc, which stands for Actionness Distribution Modeling for point-supervised action Localization. ADM-Loc generates action proposals by fitting a composite distribution, comprising both Gaussian and uniform distributions, to the action classification signals. This fitting process is tailored to each action class present in the video and is applied separately for each action instance, ensuring the distinctiveness of their distributions. ADM-Loc significantly enhances the alignment between the generated action proposals and ground-truth action instances and offers high-quality pseudo-labels for self-training. Moreover, to model action boundary snippets, it enforces consistency in action classification scores during training by employing Gaussian kernels, supervised with the proposed loss functions. ADM-Loc outperforms the state-of-the-art point-supervised methods on THUMOS'14 and ActivityNet-v1.2 datasets.
ADM-Loc: Actionness Distribution Modeling for Point-supervised Temporal Action Localization
[ { "figure_caption": "Figure 1 .1Figure 1. Framework overview. (a) The base model is a multi-scale transformer supervised with point-level labels and LMIL, LAct, and LBG losses. (b) The probability signals predicted by the base model are given to our Actionness Distribution Modeling (ADM) for pseudo-label generation. A composite distribution, comprising both Gaussian and uniform distributions, is fitted to the predicted probability signals and optimized by Brent's method [3] for pseudo-label generation. (c) ADM-Loc is a multi-scale transformer supervised with our generated pseudo-labels and optimized by LMIL, LAct, LBG, L σ MSE and L G MSE losses. ADM-Loc learns action boundary snippets by comparing the predicted action classification probabilities with a predicted Gaussian kernel, supervised by our proposed loss functions.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 11Fig. 1 provides an overview of our framework. Our framework adopts a self-training strategy that incorporates a base model and a main model, each employing a multiscale transformer as their backbone architecture. The base model's objective is to predict action probability signals and background points, Fig. 1(a). The predicted probability signals are employed to generate high-quality pseudolabels, providing additional supervision for the training of the main model (ADM-Loc), Fig. 1(b). ADM-Loc learns action boundary snippets by comparing the predicted probabilities with a predicted Gaussian kernel supervised by our proposed loss functions, L σ MSE and L G MSE , Fig. 1(c).", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "4. 1 .1Experimental Setting Datasets. THUMOS14 [22] comprises untrimmed videos across 20 unique categories. In line with prior work [17, 26], we use the 200 videos in the validation set for training and the 213 videos in the testing set for evaluation.", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Qualitative results on THUMOS'14 for three different action classes. The ground-truth instances are highlighted in red. The detection results are displayed from: (1) the base model supervised with point-level annotations (green), (2) the base model supervised with our generated pseudo-labels (brown), (3) our ADM-Loc framework (blue).", "figure_data": "", "figure_id": "fig_4", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "4. 4 .4Fig.2presents the qualitative results of our model in different stages: (1) the base model supervised with pointlevel annotations (section 3.3), (2) the base model supervised with pseudo-labels generated by ADM (section 3.4.1), and (3) our full ADM-Loc framework (section 3.4.2). This figure demonstrates that ADM-Loc partially addresses misalignments between actual instances and proposals in the base model, such as the incomplete localization of the action 'Baseball Pitch' (part a), and the over-complete localization of the action 'Shotput' (part b). 
Furthermore, in some cases, the base model supervised with pseudo-labels generates over-complete proposals (such as the last action instance of 'CliffDiving' in part c), which are adjusted in ADM-Loc by modeling action boundary snippets.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 3 Figure 434Figure3illustrates the false negative (FN) profiling across various coverages, lengths, and number of instances. Part (b) of Figure3displays the FN profiling specific to ADM-Loc. The figure reveals that higher false negative rates are associated with action instances characterized by: (1) extremely short or long durations relative to the video length (Coverage (XS) or Coverage (XL)), (2) actions of very short or very long lengths (Length (XS) or Length (XL)), and (3) videos containing a large number of action instances (#Instances (L)). Furthermore, Figure3demonstrates that ADM-Loc (part b) reduces the false negative (FN) rate compared to the base model (part c), except in two cases: Coverage (L) and Length (XL). This is because the base model samples all snippets within the sampling radius for point augmentations, whereas ADM-Loc only samples snippets that fall within the pseudo-label boundaries. To examine the limitations of ADM-Loc relative to fully-supervised methods, FN profiling of ActionFormer[68] is provided in Figure3(part a). The most significant FN differences between Ac-tionFormer (part a) and ADM-Loc (part b) are the following cases: Length (XS), #Instances (L). This demonstrates that the annotation of action boundaries is crucial for detecting very short action instances and for accurate detection in videos containing numerous instances.", "figure_data": "", "figure_id": "fig_6", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. False negative (FN) profiling of ActionFormer [68] (fully-supervised), our ADM-Loc (point-supervised) and our base (point-supervised) on THUMOS14 using DETAD [1].", "figure_data": "", "figure_id": "fig_7", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Our model (Point-supervised)", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. False positive (FP) profiling of ActionFormer [68] (fully-supervised), our ADM-Loc (point-supervised) and our base model (point-supervised) on THUMOS14 using DETAD [1].", "figure_data": "", "figure_id": "fig_9", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(a) Action \"Billiards\". (b) Action \"Clean and jerk\".", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(c) Action \"BaseballPitch\".", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Qualitative results on THUMOS'14. The ground-truth instances are highlighted in red. The detection results are displayed from: (1) the base model supervised with point-level annotations (green), (2) the base model supervised with our generated pseudo-labels (brown), (3) our ADM-Loc framework (blue). 
Transparent frames represent background frames.", "figure_data": "", "figure_id": "fig_12", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Impact of the pseudo-label sampling radius rs in ADM-Loc on THUMOS'14 where rs = ∞ means no sampling.", "figure_data": "s )mAP@IoU (%) 0.3 0.5 0.7mAP-AVG (0.1:0.7)1 2 4 ∞70.7 55.1 28.4 71.5 56.0 31.3 70.8 54.8 30.1 68.1 51.9 28.259.1 60.2 59.8 57.0", "figure_id": "tab_2", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Loc. All experiments in this table are also supervised with L MIL , L Act , L BG (eq. 4, 6, 5) losses. The pseudo-label sampling radius r s is set to 2. As indicated in the table, the implementation of both losses has led to a performance gain of 1.3% in average mAP and 2.3% in mAP at tIoU= 0.7.", "figure_data": "demonstrates MSE (eq. 14) and L G the impact of the proposed losses L σ MSE (eq. 17) in ADM-Proposed Losses mAP@IoU (%) mAP-AVGL σ MSE ✗ ✗L G MSE ✗ ✓0.3 69.7 55.0 29.0 0.5 0.7 70.6 55.7 30.0(0.1:0.7) 58.9 59.8✓✓71.5 56.0 31.360.2", "figure_id": "tab_3", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Impact of the proposed losses in ADM-Loc on THU-MOS'14.", "figure_data": "", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Table7demonstrates that ADM-Loc attains state-of-the-art results with both Uniform and Gaussian point-level distributions on THUMOS'14, indicating its robustness. However, it is observed that the ADM-Loc's performance is lower with the Uniform distribution as compared to the Gaussian distribution. We conjecture this may be attributed to the Uniform distribution's tendency to se-", "figure_data": "Background ErrLocalization ErrDouble Detection ErrConfusion ErrWrong Label ErrTrue Positive1001G2G3G4G5G6G7G8G9G10G", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "lect less discriminative snippets for point annotation, which can occur anywhere within the action's extent, such as at the boundaries Performance comparison of ADM-Loc with Uniform and Gaussian point-level distributions on THUMOS'14.", "figure_data": "DistributionMethodmAP@IoU (%) 0.3 0.5 0.7mAP-AVG (%) (0.1:0.7)ADM-Loc71.5 56.0 31.360.2GaussianBase Model 65.6 45.9 20.1 LACP[26] 64.6 45.3 21.8 Ju et al. [23] 58.2 35.9 12.8 SF-Net [41] 47.4 26.2 9.153.2 52.8 44.8 36.7ADM-Loc66.9 45.6 22.353.2UniformBase Model 63.2 39.9 14.2 LACP [26] 60.4 42.6 20.2 Ju et al. [23] 55.6 32.3 12.3 SF-Net [41] 52.0 30.2 11.849.5 49.3 42.9 40.5", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" } ]
Elahe Vahdani; Yingli Tian
[ { "authors": "Humam Alwassel; Fabian Caba Heilbron; Victor Escorcia; Bernard Ghanem", "journal": "", "ref_id": "b0", "title": "Diagnosing error in temporal action detectors", "year": "2018" }, { "authors": "Yueran Bai; Yingying Wang; Yunhai Tong; Yang Yang; Qiyue Liu; Junhui Liu", "journal": "Springer", "ref_id": "b1", "title": "Boundary content graph neural network for temporal action proposal generation", "year": "2020" }, { "authors": "P Richard; Brent", "journal": "Courier Corporation", "ref_id": "b2", "title": "Algorithms for minimization without derivatives", "year": "2013" }, { "authors": "Shyamal Buch; Victor Escorcia; Chuanqi Shen; Bernard Ghanem; Juan Carlos Niebles", "journal": "", "ref_id": "b3", "title": "Sst: Single-stream temporal action proposals", "year": "2017" }, { "authors": "Shyamal Buch; Victor Escorcia; Bernard Ghanem; Li Fei-Fei; Juan Carlos Niebles", "journal": "", "ref_id": "b4", "title": "End-to-end, single-stream temporal action detection in untrimmed videos", "year": "2019" }, { "authors": "Fabian Caba Heilbron; Victor Escorcia; Bernard Ghanem; Juan Carlos Niebles", "journal": "", "ref_id": "b5", "title": "Activitynet: A large-scale video benchmark for human activity understanding", "year": "2015" }, { "authors": "Joao Carreira; Andrew Zisserman", "journal": "", "ref_id": "b6", "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "year": "2017" }, { "authors": "Shuning Chang; Pichao Wang; Fan Wang; Hao Li; Zheng Shou", "journal": "", "ref_id": "b7", "title": "Augmented transformer with adaptive graph for temporal action proposal generation", "year": "2022" }, { "authors": "Yu-Wei Chao; Sudheendra Vijayanarasimhan; Bryan Seybold; David A Ross; Jia Deng; Rahul Sukthankar", "journal": "", "ref_id": "b8", "title": "Rethinking the faster r-cnn architecture for temporal action localization", "year": "2018" }, { "authors": "Mengyuan Chen; Junyu Gao; Shicai Yang; Changsheng Xu", "journal": "", "ref_id": "b9", "title": "Dual-evidential learning for weakly-supervised temporal action localization", "year": null }, { "authors": "Anthony Cioppa; Adrien Deliege; Silvio Giancola; Bernard Ghanem; Marc Van Droogenbroeck; Rikke Gade; Thomas B Moeslund", "journal": "", "ref_id": "b10", "title": "A context-aware loss function for action spotting in soccer videos", "year": "2020" }, { "authors": "Richard H Thomas G Dietterich; Tomás Lathrop; Lozano-Pérez", "journal": "Artificial intelligence", "ref_id": "b11", "title": "Solving the multiple instance problem with axis-parallel rectangles", "year": "1997" }, { "authors": "Jiyang Gao; Zhenheng Yang; Kan Chen; Chen Sun; Ram Nevatia", "journal": "", "ref_id": "b12", "title": "Turn tap: Temporal unit regression network for temporal action proposals", "year": "2017" }, { "authors": "Jiyang Gao; Zhenheng Yang; Ram Nevatia", "journal": "", "ref_id": "b13", "title": "Cascaded boundary regression for temporal action detection", "year": "2017" }, { "authors": "Junyu Gao; Mengyuan Chen; Changsheng Xu", "journal": "", "ref_id": "b14", "title": "Finegrained temporal contrastive learning for weakly-supervised temporal action localization", "year": "2022" }, { "authors": "Silvio Giancola; Mohieddine Amine; Tarek Dghaily; Bernard Ghanem", "journal": "", "ref_id": "b15", "title": "Soccernet: A scalable dataset for action spotting in soccer videos", "year": "2018" }, { "authors": "Bo He; Xitong Yang; Le Kang; Zhiyu Cheng; Xin Zhou; Abhinav Shrivastava", "journal": "", "ref_id": "b16", "title": "Asm-loc: Action-aware 
segment modeling for weakly-supervised temporal action localization", "year": "2022" }, { "authors": "Chengkun He; Jie Shao; Jiayu Sun", "journal": "Multimedia Tools and Applications", "ref_id": "b17", "title": "An anomalyintroduced learning method for abnormal event detection", "year": "2018" }, { "authors": "Fa-Ting Hong; Jia-Chang Feng; Dan Xu; Ying Shan; Wei-Shi Zheng", "journal": "", "ref_id": "b18", "title": "Cross-modal consensus network for weakly supervised temporal action localization", "year": "2021" }, { "authors": "Gao Huang; Yixuan Li; Geoff Pleiss; Zhuang Liu; John E Hopcroft; Kilian Q Weinberger", "journal": "", "ref_id": "b19", "title": "Snapshot ensembles: Train 1, get m for free", "year": "2017" }, { "authors": "Linjiang Huang; Liang Wang; Hongsheng Li", "journal": "", "ref_id": "b20", "title": "Weakly supervised temporal action localization via representative snippet knowledge propagation", "year": "2022" }, { "authors": "Yu-Gang Jiang; Jingen Liu; Roshan Zamir; George Toderici; Ivan Laptev; Mubarak Shah; Rahul Sukthankar", "journal": "", "ref_id": "b21", "title": "Thumos challenge: Action recognition with a large number of classes", "year": "2014" }, { "authors": "Chen Ju; Peisen Zhao; Ya Zhang; Yanfeng Wang; Qi Tian", "journal": "", "ref_id": "b22", "title": "Point-level temporal action localization: Bridging fully-supervised proposals to weakly-supervised losses", "year": "2020" }, { "authors": "Chen Ju; Peisen Zhao; Siheng Chen; Ya Zhang; Yanfeng Wang; Qi Tian", "journal": "", "ref_id": "b23", "title": "Divide and conquer for singleframe temporal action localization", "year": "2021" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b24", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Pilhyeon Lee; Hyeran Byun", "journal": "", "ref_id": "b25", "title": "Learning action completeness from points for weakly-supervised temporal action localization", "year": "2021" }, { "authors": "Pilhyeon Lee; Youngjung Uh; Hyeran Byun", "journal": "", "ref_id": "b26", "title": "Background suppression network for weakly-supervised temporal action localization", "year": "2020" }, { "authors": "Jin Li; Xianglong Liu; Zhuofan Zong; Wanru Zhao; Mingyuan Zhang; Jingkuan Song", "journal": "", "ref_id": "b27", "title": "Graph attention based proposal 3d convnets for action detection", "year": "2020" }, { "authors": "Yueyang Li; Yonghong Hou; Wanqing Li", "journal": "", "ref_id": "b28", "title": "Sub-action prototype learning for point-level weakly-supervised temporal action localization", "year": "2023" }, { "authors": "Chuming Lin; Jian Li; Yabiao Wang; Ying Tai; Donghao Luo; Zhipeng Cui; Chengjie Wang; Jilin Li; Feiyue Huang; Rongrong Ji", "journal": "", "ref_id": "b29", "title": "Fast learning of temporal action proposal via dense boundary generator", "year": "2020" }, { "authors": "Chuming Lin; Chengming Xu; Donghao Luo; Yabiao Wang; Ying Tai; Chengjie Wang; Jilin Li; Feiyue Huang; Yanwei Fu", "journal": "", "ref_id": "b30", "title": "Learning salient boundary feature for anchorfree temporal action localization", "year": "2021" }, { "authors": "Tianwei Lin; Xu Zhao; Zheng Shou", "journal": "", "ref_id": "b31", "title": "Single shot temporal action detection", "year": "2017" }, { "authors": "Tianwei Lin; Xu Zhao; Haisheng Su; Chongjing Wang; Ming Yang", "journal": "", "ref_id": "b32", "title": "Bsn: Boundary sensitive network for temporal action proposal generation", "year": "2018" }, { "authors": "Tianwei Lin; Xiao Liu; Xin 
Li; Errui Ding; Shilei Wen", "journal": "", "ref_id": "b33", "title": "Bmn: Boundary-matching network for temporal action proposal generation", "year": "2019" }, { "authors": "Daochang Liu; Tingting Jiang; Yizhou Wang", "journal": "", "ref_id": "b34", "title": "Completeness modeling and context separation for weakly supervised temporal action localization", "year": "2019" }, { "authors": "Liyuan Liu; Xiaodong Liu; Jianfeng Gao; Weizhu Chen; Jiawei Han", "journal": "", "ref_id": "b35", "title": "Understanding the difficulty of training transformers", "year": "2020" }, { "authors": "Qinying Liu; Zilei Wang", "journal": "", "ref_id": "b36", "title": "Progressive boundary refinement network for temporal action detection", "year": "2020" }, { "authors": "Yuan Liu; Lin Ma; Yifeng Zhang; Wei Liu; Shih-Fu Chang", "journal": "", "ref_id": "b37", "title": "Multi-granularity generator for temporal action proposal", "year": "2019" }, { "authors": "Wang Luo; Tianzhu Zhang; Wenfei Yang; Jingen Liu; Tao Mei; Feng Wu; Yongdong Zhang", "journal": "", "ref_id": "b38", "title": "Action unit memory network for weakly supervised temporal action localization", "year": "2021" }, { "authors": "Zhekun Luo; Devin Guillory; Baifeng Shi; Wei Ke; Fang Wan; Trevor Darrell; Huijuan Xu", "journal": "Springer", "ref_id": "b39", "title": "Weakly-supervised action localization with expectation-maximization multiinstance learning", "year": "2020" }, { "authors": "Fan Ma; Linchao Zhu; Yi Yang; Shengxin Zha; Gourab Kundu; Matt Feiszli; Zheng Shou", "journal": "Springer", "ref_id": "b40", "title": "Sf-net: Single-frame supervision for temporal action localization", "year": "2020" }, { "authors": "Junwei Ma; Krishna Satya; Maksims Gorti; Guangwei Volkovs; Yu", "journal": "", "ref_id": "b41", "title": "Weakly supervised action selection learning in video", "year": "2021" }, { "authors": "Elaheh Karthik Mahadevan; Sowmya Sanoubari; James E Somanath; Ehud Young; Sharlin", "journal": "", "ref_id": "b42", "title": "Av-pedestrian interaction design using a pedestrian mixed traffic simulator", "year": "2019" }, { "authors": "Kyle Min; Jason J Corso", "journal": "Springer", "ref_id": "b43", "title": "Adversarial background-aware loss for weakly-supervised temporal activity localization", "year": "2020" }, { "authors": "Davide Moltisanti; Sanja Fidler; Dima Damen", "journal": "", "ref_id": "b44", "title": "Action recognition from single timestamp supervision in untrimmed videos", "year": "2019" }, { "authors": "Sanath Narayan; Hisham Cholakkal; Fahad Shahbaz Khan; Ling Shao", "journal": "", "ref_id": "b45", "title": "3c-net: Category count and center loss for weakly-supervised action localization", "year": "2019" }, { "authors": "Sanath Narayan; Hisham Cholakkal; Munawar Hayat; Fahad Shahbaz Khan; Ming-Hsuan Yang; Ling Shao", "journal": "", "ref_id": "b46", "title": "D2-net: Weakly-supervised action localization via discriminative embeddings and denoised activations", "year": "2021" }, { "authors": "Megha Nawhal; Greg Mori", "journal": "", "ref_id": "b47", "title": "Activity graph transformer for temporal action localization", "year": "2021" }, { "authors": "Alejandro Pardo; Humam Alwassel; Fabian Caba; Ali Thabet; Bernard Ghanem", "journal": "", "ref_id": "b48", "title": "Refineloc: Iterative refinement for weakly-supervised action localization", "year": "2021" }, { "authors": "Sanqing Qu; Guang Chen; Zhijun Li; Lijun Zhang; Fan Lu; Alois Knoll", "journal": "", "ref_id": "b49", "title": "Acm-net: Action context modeling network for 
weakly-supervised temporal action localization", "year": "2021" }, { "authors": "Amir Rasouli; John K Tsotsos ", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b50", "title": "Autonomous vehicles that interact with pedestrians: A survey of theory and practice", "year": "2019" }, { "authors": "Wenfei Huan Ren; Tianzhu Yang; Yongdong Zhang; Zhang", "journal": "", "ref_id": "b51", "title": "Proposal-based multiple instance learning for weakly-supervised temporal action localization", "year": "2023" }, { "authors": "Mamshad Nayeem Rizve; Gaurav Mittal; Ye Yu; Matthew Hall; Sandra Sajeev; Mubarak Shah; Mei Chen", "journal": "", "ref_id": "b52", "title": "Pivotal: Prior-driven supervision for weakly-supervised temporal action localization", "year": "2023" }, { "authors": "Dingfeng Shi; Yujie Zhong; Qiong Cao; Lin Ma; Jia Li; Dacheng Tao", "journal": "", "ref_id": "b53", "title": "Tridet: Temporal action detection with relative boundary modeling", "year": "2023" }, { "authors": "Krishna Kumar; Singh ; Yong Jae; Lee ", "journal": "IEEE", "ref_id": "b54", "title": "Hide-and-seek: Forcing a network to be meticulous for weakly-supervised object and action localization", "year": "2017" }, { "authors": "Elahe Vahdani; Yingli Tian", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b55", "title": "Deep learning-based action detection in untrimmed videos: A survey", "year": "2023" }, { "authors": "Binglu Wang; Yongqiang Zhao; Le Yang; Teng Long; Xuelong Li", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b56", "title": "Temporal action localization in the deep learning era: A survey", "year": "2023" }, { "authors": "Limin Wang; Bingkun Huang; Zhiyu Zhao; Zhan Tong; Yinan He; Yi Wang; Yali Wang; Yu Qiao", "journal": "", "ref_id": "b57", "title": "Videomae v2: Scaling video masked autoencoders with dual masking", "year": "2023" }, { "authors": "Yu Wang; Yadong Li; Hongbin Wang", "journal": "", "ref_id": "b58", "title": "Two-stream networks for weakly-supervised temporal action localization with semantic-aware mechanisms", "year": "2023" }, { "authors": "Mengmeng Xu; Chen Zhao; David S Rojas; Ali Thabet; Bernard Ghanem", "journal": "", "ref_id": "b59", "title": "G-tad: Sub-graph localization for temporal action detection", "year": "2020" }, { "authors": "Le Yang; Junwei Han; Tao Zhao; Tianwei Lin; Dingwen Zhang; Jianxin Chen", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b60", "title": "Background-click supervision for temporal action localization", "year": "2021" }, { "authors": "Wenfei Yang; Tianzhu Zhang; Xiaoyuan Yu; Tian Qi; Yongdong Zhang; Feng Wu", "journal": "", "ref_id": "b61", "title": "Uncertainty guided collaborative training for weakly supervised temporal action detection", "year": "2021" }, { "authors": "Yu Yao; Xizi Wang; Mingze Xu; Zelin Pu; Ella Atkins; David Crandall", "journal": "", "ref_id": "b62", "title": "When, where, and what? 
a new dataset for anomaly detection in driving videos", "year": "2020" }, { "authors": "Runhao Zeng; Chuang Gan; Peihao Chen; Wenbing Huang; Qingyao Wu; Mingkui Tan", "journal": "IEEE Transactions on Image Processing", "ref_id": "b63", "title": "Breaking winner-takes-all: Iterative-winners-out networks for weakly supervised temporal action localization", "year": "2019" }, { "authors": "Runhao Zeng; Wenbing Huang; Mingkui Tan; Yu Rong; Peilin Zhao; Junzhou Huang; Chuang Gan", "journal": "", "ref_id": "b64", "title": "Graph convolutional networks for temporal action localization", "year": "2019" }, { "authors": "Yuanhao Zhai; Le Wang; Wei Tang; Qilin Zhang; Junsong Yuan; Gang Hua", "journal": "Springer", "ref_id": "b65", "title": "Two-stream consensus network for weakly-supervised temporal action localization", "year": "2020" }, { "authors": "Can Zhang; Meng Cao; Dongming Yang; Jie Chen; Yuexian Zou", "journal": "", "ref_id": "b66", "title": "Cola: Weakly-supervised temporal action localization with snippet contrastive learning", "year": "2021" }, { "authors": "Chen-Lin Zhang; Jianxin Wu; Yin Li", "journal": "Springer", "ref_id": "b67", "title": "Actionformer: Localizing moments of actions with transformers", "year": "2022" }, { "authors": "Da Zhang; Xiyang Dai; Xin Wang; Yuan-Fang Wang", "journal": "", "ref_id": "b68", "title": "S3d: single shot multi-span detector via fully 3d convolutional networks", "year": "2018" }, { "authors": "Chen Zhao; Ali K Thabet; Bernard Ghanem", "journal": "", "ref_id": "b69", "title": "Video selfstitching graph network for temporal action localization", "year": "2021" }, { "authors": "Jia-Xing Zhong; Nannan Li; Weijie Kong; Tao Zhang; Thomas H Li; Ge Li", "journal": "", "ref_id": "b70", "title": "Step-by-step erasion, one-by-one collection: a weakly supervised temporal action detector", "year": "2018" }, { "authors": "Jingqiu Zhou; Linjiang Huang; Liang Wang; Si Liu; Hongsheng Li", "journal": "", "ref_id": "b71", "title": "Improving weakly supervised temporal action localization by bridging train-test gap in pseudo labels", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 308.86, 477.45, 236.25, 23.65 ], "formula_id": "formula_0", "formula_text": "Z = {Z 1 , Z 2 , • • • , Z L } where Z l ∈ R T l ×D" }, { "formula_coordinates": [ 3, 357.42, 701.01, 141.38, 12.84 ], "formula_id": "formula_1", "formula_text": "Pl [t, c] = P l [t, c](1 -P l [t, C + 1])." }, { "formula_coordinates": [ 4, 101.36, 195.11, 185.01, 13.38 ], "formula_id": "formula_2", "formula_text": "Φ = {([t i -r a , t i + r a ], y i )} Nact i=1 .(2)" }, { "formula_coordinates": [ 4, 80.47, 293.43, 175.04, 30.2 ], "formula_id": "formula_3", "formula_text": "Φ l = {([(t i /θ l ) -r a , (t i /θ l ) + r a ], y i )} Nact i=1 = {(t j , y j )} N l j=1 ." }, { "formula_coordinates": [ 4, 50.11, 430.75, 241.18, 40.79 ], "formula_id": "formula_4", "formula_text": "L MIL = - 1 L L l=1 C c=1 y[c] log( Pl [c])+(1-y[c]) log(1-Pl [c]).(4)" }, { "formula_coordinates": [ 4, 56.02, 547.46, 223.57, 44.37 ], "formula_id": "formula_5", "formula_text": "L Act = - 1 N ⋆ act L l=1 Nl j=1 C c=1 y j [c] log( Pl [t j , c])(1 -Pl [t j , c]) γ -(1 -y j [c]) log(1 -Pl [t j , c]) Pl [t j , c] γ ." }, { "formula_coordinates": [ 4, 278.62, 591.11, 7.74, 12 ], "formula_id": "formula_6", "formula_text": ")5" }, { "formula_coordinates": [ 4, 313.4, 91.94, 231.71, 45.8 ], "formula_id": "formula_7", "formula_text": "L BG = - 1 M bg L l=1 M l j=1 C c=1 ( Pl [b j , c]) γ log(1 -Pl [b j , c]) + (1 -p l (b j )) γ log p l (b j ).(6)" }, { "formula_coordinates": [ 4, 343.01, 201.43, 202.11, 12.66 ], "formula_id": "formula_8", "formula_text": "L Total = λ MIL L MIL + λ Act L Act + λ BG L BG .(7)" }, { "formula_coordinates": [ 4, 383.43, 630.89, 54.76, 12.32 ], "formula_id": "formula_9", "formula_text": "β i = [b s i , b e i ]." }, { "formula_coordinates": [ 5, 50.99, 95.91, 234.5, 19.26 ], "formula_id": "formula_10", "formula_text": "t ⋆ i = argmax t ( PL [t, c]) for t ∈ (β i ∩ [t i -δd i , t i + δd i ])." }, { "formula_coordinates": [ 5, 105.87, 247.14, 180.5, 24.64 ], "formula_id": "formula_11", "formula_text": "G(t, µ, σ) = 1 σ √ 2π e -1 2 ( t-µ σ ) 2(9)" }, { "formula_coordinates": [ 5, 87.19, 334.04, 199.18, 13.02 ], "formula_id": "formula_12", "formula_text": "u b = max(t ⋆ i -b s i , b e i -t ⋆ i ), l b = 10 -6 .(10)" }, { "formula_coordinates": [ 5, 50.11, 370.94, 236.25, 23.96 ], "formula_id": "formula_13", "formula_text": "[l b , u b ] to fit Gaussian distribution G(t, t ⋆ i , σ i ) to probability signal PL [t, c]." }, { "formula_coordinates": [ 5, 72.62, 436.7, 209.6, 27.09 ], "formula_id": "formula_14", "formula_text": "L G-fit MSE = t∈βi α • G(t, t ⋆ i , σ i ) -PL [t, c] 2 . (11" }, { "formula_coordinates": [ 5, 282.21, 441.25, 4.15, 12 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 5, 91.45, 561.45, 190.76, 27.09 ], "formula_id": "formula_16", "formula_text": "L U-fit MSE = t∈βi U (t ⋆ i , ω i ) -PL [t, c] 2 . (12" }, { "formula_coordinates": [ 5, 282.21, 566.01, 4.15, 12 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 5, 50.11, 634.84, 98.48, 12.32 ], "formula_id": "formula_18", "formula_text": "I i = [t ⋆ i -∆ i , t ⋆ i + ∆ i ]." }, { "formula_coordinates": [ 5, 53.63, 689.8, 232.73, 24 ], "formula_id": "formula_19", "formula_text": "Ψ = {(t i , σ i , I i , y i )} Nact i=1 where I i = [t ⋆ i -∆ i , t ⋆ i + ∆ i ].(13)" }, { "formula_coordinates": [ 5, 369.36, 565.9, 171.6, 30.49 ], "formula_id": "formula_20", "formula_text": "L σ MSE = 1 N act Nact i=1 (σ i -σi ) 2 . 
(14" }, { "formula_coordinates": [ 5, 540.96, 574.08, 4.15, 12 ], "formula_id": "formula_21", "formula_text": ")" }, { "formula_coordinates": [ 5, 371.7, 668.52, 169.27, 17.4 ], "formula_id": "formula_22", "formula_text": "G i (t, t i , σi ) = e -1 2 t-t i σi 2 . (15" }, { "formula_coordinates": [ 5, 540.96, 673.86, 4.15, 12 ], "formula_id": "formula_23", "formula_text": ")" }, { "formula_coordinates": [ 6, 62.45, 434.3, 211.58, 12.33 ], "formula_id": "formula_24", "formula_text": "G c (t) = max {G i (t, t i , σi )| i ∈ [1, N act ], y i [c] = 1} ." }, { "formula_coordinates": [ 6, 67.31, 516.7, 214.9, 30.59 ], "formula_id": "formula_25", "formula_text": "L G MSE = 1 T 1 |S c | c∈Sc T1 t=1 G c (t) -P1 [t, c] 2 . (17" }, { "formula_coordinates": [ 6, 282.21, 524.82, 4.15, 12 ], "formula_id": "formula_26", "formula_text": ")" }, { "formula_coordinates": [ 6, 345.73, 435.55, 195.24, 26.84 ], "formula_id": "formula_27", "formula_text": "L Total =λ MIL L MIL + λ Act L Act + λ BG L BG + λ G L G MSE + λ σ L σ MSE . (18" }, { "formula_coordinates": [ 6, 540.96, 448.76, 4.15, 12 ], "formula_id": "formula_28", "formula_text": ")" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b54", "b24", "b39", "b18", "b8", "b63", "b39", "b57", "b65", "b56", "b69", "b57", "b35", "b42", "b34", "b68", "b64", "b24", "b51", "b15", "b31", "b11", "b36", "b48", "b9", "b26" ], "table_ref": [], "text": "Traffic signal control (TSC) is an important and challenging real-world problem, which is central to urban congestion alleviation and improving traffic system efficiency. Over the years, the traffic signal control problem has attracted considerable attention from research communities. Although widely used in practice, conventional transportation engineering methods for TSC (Webster 1958;Hunt et al. 1982;Lowrie 1990;Gartner et al. 1991;Cools, Gershenson, and D'Hooghe 2013;Yan et al. 2019;Lu et al. 2023) heavily rely on domain knowledge (such as pre-defined rules) and field calibration, which are not flexible enough to handle highly dynamic traffic conditions and can be costly for large-scale implementation. On the other hand, the recently emerged reinforcement learning (RL) based TSC methods enable direct policy learning from environment interactions without making strong assumptions about the traffic, thus holding great promise for achieving general-purpose and fully adaptive traffic signal control (Wei et al. 2018;Zheng et al. 2019b;Zang et al. 2020;Wei et al. 2021). However, while RL-based methods have achieved impressive performance in traffic simulation environments, there have been few successful attempts to deploy them in the real world.\nThe poor real-world applicability of existing RL-based TSC methods primarily stems from two notable issues. First, real-world intersections are complicated open systems with many influencing factors, such as heterogeneous driving and pedestrian behaviors, making realistic simulation difficult. Most RL-based TSC methods are built upon conventional online RL framework, which heavily relies on extensive interactions in idealized traffic simulations, and suffers from severe sim-to-real transfer issues (Zhang and Deng 2023). Second, the data collected from real-world intersections are typically coarse-grained, providing much less information as compared to the full-state observation obtainable from traffic simulators. For example, many existing RL-based models leverage highly informative state features from simulators for signal control, such as the position of vehicles (Van der Pol and Oliehoek 2016; Wei et al. 2018;Liang et al. 2019;Mo et al. 2022) and queue lengths (Li, Lv, and Wang 2016;Zheng et al. 2019b;Zhang et al. 2019;Yoon et al. 2021), however, such information is typically not directly obtainable from the widely used detectors for real-world traffic control systems (Hunt et al. 1982;Sims and Dobinson 1980).\nThe above issues necessitate the need for developing a completely data-driven TSC framework to bypass the limitations of traffic simulators and improve deployment feasibility. The recently emerged offline RL (Fujimoto, Meger, and Precup 2019;Levine et al. 2020) provides an attractive paradigm for RL-based TSC policy learning using historical data from real-world intersections. However, adopting an offline RL solution for TSC also faces two major chal-lenges. First, the fine-grained reward signal is hard to obtain from real signalized intersections. 
Important evaluation metrics such as queue lengths and delays, although easily obtainable in simulation, are not monitored nor collected in real-world signal control data, providing no reward information for data-driven optimization. Second, the amount of data we can obtain from real signalized intersections may be very limited. For instance, if we sample in 5-minute intervals, a whole month's historical traffic data of an intersection will only correspond to about 8,600 state-action samples, much smaller in dataset size as compared to typical offline RL benchmark tasks (Fu et al. 2020).\nTo tackle the above challenges, we develop a fully Data-Driven framework for real-world Traffic Signal Control (D2TSC). Our framework combines the merits of both wellestablished traffic flow theories and a state-of-the-art offline RL algorithm to achieve sample-efficient and deploymentfriendly TSC policy learning. Specifically, we first develop a data-driven reward inference model based on shockwave theory (Lighthill and Whitham 1955;Richards 1956;Daganzo 1997;Jin 2015) and Gaussian process interpolation process using realistic coarse-grained intersection data. With the learned reward, we develop an in-sample learning offline RL method with customized state and action encoding design according to real-world TSC data characteristics, as well as data augmentation for sample-efficient policy learning.\nTo evaluate our framework, we collect real-world data from a signalized intersection in China and build a highly customized simulation environment with state observations strictly following the real-world detection (e.g., 5-minute traffic flow and spatial vehicle count of each lane in the 150m range every 5 seconds monitored by license-plate recognition cameras). We generate 3 months of data based on the actual traffic flows and timing plans and train all models based on these limited data. Numerical experiments show that our framework outperforms all other baselines and exhibits very good deployability, given its sample efficiency, compatibility with coarse-grained TSC data, and capability to infer reward signals directly from the data." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b58", "b5", "b0", "b57", "b49", "b46", "b56", "b50", "b55", "b25", "b7", "b6", "b10", "b65", "b30", "b69" ], "table_ref": [], "text": "Traffic Signal Control Using RL RL methods allow direct learning of an optimized policy through interactions with the unknown environment. This nice property has long attracted researchers to apply RL to solve traffic signal control problems. Early works (Wiering et al. 2004;Cai, Wong, and Heydecker 2009;Abdulhai, Pringle, and Karakoulas 2003) use the tabular Q-learning method to solve highly simplified problems, with states required to be discrete and low-dimensional. With the development of deep RL, many recent works leverage highly expressive deep neural networks to model more complex and information-rich state inputs to improve TSC performance (Wei et al. 2018;Zheng et al. 2019a,b;Rodrigues and Azevedo 2019;Oroojlooy et al. 2020;Wei et al. 2021;Shabestary and Abdulhai 2022). Other advanced modeling tools have also been adopted to improve the signal control scalability and robustness. For example, multi-agent RL and graph neural networks have been adopted to scale the TSC control from a single intersection to multiple intersections in a road network (Wei et al. 2019;Iqbal and Sha 2019;Chu et al. 2019;Chen et al. 
2020;Devailly, Larocque, and Charlin 2021); meta-learning has also been explored for RLbased TSC (Zang et al. 2020), to help the learned model adapt quickly and stably in new traffic scenarios.\nAlthough the aforementioned methods achieve impressive performance in simulation environments, they are still far from being able to deploy in real-world scenarios. Their poor practical feasibility mainly stemmed from the over-reliance on the traffic simulation environments, as analyzed in our Introduction section. Recently, there have been some attempts (Kunjir and Chawla 2022;Zhang and Deng 2023) to adopt offline RL methods to solve the TSC problem, which allows policy learning from pre-collected offline datasets. However, these studies still use unrealistic fine-grained state inputs and ground-truth reward information that are only obtainable from simulation environments for modeling, completely deviating from their original intention of offline learning. In this study, we address the drawbacks of the previous works by proposing a novel offline RL framework for traffic signal control, which enables completely simulatorfree and deployment-friendly policy learning using realistic TSC data." }, { "figure_ref": [], "heading": "Offline Reinforcement Learning", "publication_ref": [ "b31", "b27", "b67", "b15", "b29", "b15", "b29", "b15", "b29", "b2", "b1", "b45", "b61", "b28", "b60", "b17" ], "table_ref": [], "text": "Reinforcement learning provides a promising way to solve TSC problems. However, existing studies primarily adopt online RL methods, in which the control policy is learned via interacting with the environment following a trial-and-error paradigm. However, the requirement of online interactions in online RL inhibits the successful applications of RL methods in many complex real-world tasks, as interaction with the real system during policy learning can be costly, unsafe, or ethically problematic, and high-fidelity simulators are also hard to build (Levine et al. 2020;Kiran et al. 2021;Zhan et al. 2022). On the contrary, the recently emerged offline RL methods smartly tackle the challenges of online RL via optimizing policies using only pre-collected offline datasets without further online interactions (Fujimoto, Meger, and Precup 2019;Wu, Tucker, and Nachum 2019;Kumar et al. 2020). Therefore, offline RL holds great promise to utilize existing historical operational data recorded in existing TSC control systems to facilitate signal control optimization.\nHowever, the absence of online interactions also poses new technical difficulties for offline policy learning. Concretely, directly applying online RL methods in the offline setting faces significant training instability when performing model evaluation on samples outside the dataset distribution (also referred to as out-of-distribution (OOD)), where estimation errors can quickly build up and cause the issue of distributional shift and severe value overestimation (Fujimoto, Meger, and Precup 2019;Kumar et al. 2020). One straightforward approach to address these problems is through policy constraints, which incorporates a constraint into the policy learning to prevent excessive deviation of the optimized policy from the behavior policy (the policy present in the offline dataset) (Fujimoto, Meger, and Precup 2019;Wu, Tucker, and Nachum 2019;Li et al. 2023b). There are also some value regularization methods that penalize the value function to assign low values at OOD regions to mitigate overestimation errors (Kumar et al. 2020;Bai et al. 
2021;An et al. 2021;Niu et al. 2022). Although effective, these methods still need to evaluate the value function on policyinduced, possibly OOD actions, which can induce potential instability. Recently, in-sample learning has emerged as a promising alternative for training offline RL without the need to query values on OOD samples (Xu et al. 2023;Kostrikov, Nair, and Levine 2021;Xu et al. 2022;Garg et al. 2023), thereby effectively addressing the issues of overestimation errors in OOD regions. In our study, we also adopt the in-sample learning offline RL framework, with customized state-action encoding designs and a sampleefficient data augmentation scheme tailored to realistic TSC optimization problems." }, { "figure_ref": [], "heading": "Preliminary Markov Decision Process", "publication_ref": [ "b37", "b40", "b13", "b21", "b37", "b13", "b21", "b40" ], "table_ref": [], "text": "The RL problem is typically formulated as a Markov Decision Process (MDP), modeled by a tuple ⟨S, A, r, γ, P, ρ⟩. S and A denote the state and action space. r : S × A → R is the reward function. γ ∈ (0, 1) denotes the discount factor. P : S × A → S represents the transition dynamic and ρ is the initial state distribution. The goal of RL is to find an optimal policy π * : S → A that can maximize the expected discounted cumulative rewards:\nmax π E ∞ t=0 γ t r(s t , a t )|s 0 ∼ ρ, a t ∼ π, s t+1 ∼ P . (1)\nThe expected discounted cumulative reward presented in Eq. ( 1) is typically expressed as a state-value function\nV π (s) := E [ ∞ t=0 γ t r(s t , a t )|s 0 = s, a t ∼ π, s t+1 ∼ P] or an action-value function Q π (s, a) := E[ ∞ t=0 γ t r(s t , a t )| s 0 = s, a 0 = 0, a t+1 ∼ π, s t+1 ∼ P].\nIn practice, modern deep RL methods typically approximate the action-value function Q π (s, a) using deep neural networks by minimizing the squared Bellman error (Lillicrap et al. 2016;Mnih et al. 2016;Fujimoto, Hoof, and Meger 2018;Haarnoja et al. 2018):\nQ π = arg min Q E (s,a,r,s ′ )∼D [(r(s, a) +γE a ′ ∼π(•|s ′ ) Q(s ′ , a ′ ) -Q(s, a)) 2 ],(2)\nwhere, D is a data buffer (also known as replay buffer) that contains historical transitions D := {(s, a, s ′ , r) i } gradually filled during online interactions, or a fixed offline dataset in the offline RL setting. With the estimated value function, most RL methods learn an optimized policy π(a|s) by maximizing the value function as:\nmax π E s∼D,a∼π(•|s) [Q π (s, a)] .\n(3)\nBy repeatedly alternating between Eq. (2-3), the reward maximization objective stated in Eq. ( 1) can be approximately solved (Lillicrap et al. 2016;Fujimoto, Hoof, and Meger 2018;Haarnoja et al. 2018;Mnih et al. 2016)." }, { "figure_ref": [], "heading": "Offline Reinforcement Learning", "publication_ref": [ "b15", "b29", "b15", "b29", "b62", "b15", "b29", "b61", "b61", "b19", "b21", "b61", "b61", "b28", "b17" ], "table_ref": [], "text": "Under the offline RL setting, the dataset D introduced above is fixed and is pre-collected by some unknown behavior policies µ(a|s) without the possibility of further interacting with the environment. In this case, directly adopting online RL methods in Eq. (2-3) will lead to severe instability caused by distributional shift (Fujimoto, Meger, and Precup 2019) and approximation error accumulations (Kumar et al. 2020). Specifically, maximizing the action value in Eq. ( 3) may cause the learned policy to deviate from the offline data distribution to some OOD regions. This will introduce approximation errors when learning the action-value function in Eq. ( 2). 
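To make this failure mode concrete, the fragment below marks where the out-of-distribution query enters a standard Bellman-target computation. This is a generic actor-critic sketch in PyTorch style, not part of any method discussed here, and all names and the discount value are our own choices.

```python
import torch

def td_target(batch, policy, q_target, gamma=0.99):
    """One-step Bellman target of the kind minimized in Eq. (2)."""
    s_next, r, done = batch["s_next"], batch["r"], batch["done"]
    with torch.no_grad():
        a_next = policy(s_next)             # may lie far from any action in the offline dataset (OOD)
        q_next = q_target(s_next, a_next)   # Q was never fitted on such (s', a') pairs offline
    return r + gamma * (1.0 - done) * q_next
```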
Such errors will quickly build up and cause severe value overestimation if no regularization is used to stabilize the training, leading to training instability and policy learning failures (Fujimoto, Meger, and Precup 2019;Kumar et al. 2020;Wu, Tucker, and Nachum 2019;Xu et al. 2021).\nTo combat these challenges, the most straightforward way is to regularize the optimized policy to stay close to the data distribution (Fujimoto, Meger, and Precup 2019;Kumar et al. 2020;Xu et al. 2023;Wu, Tucker, and Nachum 2019). By doing so, the maximization over the value function in Eq. ( 3) is performed within the offline data distribution, thus avoiding value overestimation at OOD regions. A neat choice to achieve this is to augment the standard RL objective in Eq. ( 1) with a behavior regularization term and solves a behavior regularized MDP (Xu et al. 2023):\nmax π E ∞ t=0 γ t r(s t , a t ) -αf π(a t |s t ) µ(a t |s t ) ,(4)\nwhere f (•) can be any f -function from f -divergences. This objective aims to maximize the cumulative return while minimizing the deviation of the optimized policy π to the behavior policy µ by minimizing the f -divergence regularization term f (π/µ), thereby ensuring the learned policy stays close to the data distribution and avoids the distributional shift issue. Akin to the treatment in max-entropy RL (Haarnoja et al. 2017(Haarnoja et al. , 2018)), the most direct approach to solving the behavior regularized MDP is to augment a behavior regularization term f (π/µ) upon the original objectives in Eq. ( 2) and Eq. (3):\nmin Q E (s,a,s ′ )∼D r(s, a) + γE a ′ ∼π(•|s ′ ) Q(s ′ , a ′ ) -αf π(a ′ |s ′ ) µ(a ′ |s ′ ) -Q(s, a) 2 ,(5)\nmax π E s∼D,a∼π Q(s, a) -αf π(a|s) µ(a|s) .(6)\nIntuitively, the augmented action-value learning step in Eq. ( 5) penalizes the action-value Q(s, a) at the regions that undergo large distributional shift measured by the fdivergence. Meanwhile, the augmented policy learning step in Eq. ( 6) seeks to maximize the value function and force the policy to stay close to the behavior policy using the behavior regularizer. More importantly, (Xu et al. 2023) recently demonstrates that the objectives in Eq. (5-6) can equivalently transfer to a broad class of SOTA in-sample learning offline RL methods (Xu et al. 2023;Kostrikov, Nair, and Levine 2021;Garg et al. 2023), which enjoy great training stability and SOTA performances. In this paper, we also resort to the in-sample version of Eq. (5-6) for its superior performances (Section )." }, { "figure_ref": [], "heading": "Traffic Signal Control Using Real-World Data", "publication_ref": [], "table_ref": [], "text": "Before introducing our framework, we first provide the terminologies and notations in TSC optimization: • Traffic movement: A traffic movement is defined as traffic moving across the intersection towards a certain direction, such as left-turn, straight, and right-turn. • Signal phase and phase order: a signal phase p is a continuous period during which the traffic signal for a specific traffic movement displays the same condition (red or green). For example, the activation of the \"North-South Straight\" phase illuminates green signals for the southbound and northbound straight lanes, while other entrance lanes show red signals. We denote P as the set of all phases at an intersection. This paper focuses on the main signal phases, ignoring the transition phases like yellow and all-red. A phase order (a.k.a. 
phase structure) refers to a sequence consisting of specific signal phases that are activated sequentially and cyclically. • Signal timing plan: signal timing plan is a collection of parameters and logic to allocate the right-of-way at a signalized intersection. It specifies the signal cycle length T c and the green T p g and red times T p r of each phase p ∈ P. In modern signalized intersections, traffic condition data are usually collected by video cameras mounted on top of traffic light arms. As shown in Figure 1(a), the camera monitors a limited range (typically about 150m) behind the stopline of each entrance lane. It records the spatial vehicle count x n within the coverage area and counts the total flow x f at some fixed time intervals (in our real-world data, x n and x f are recorded in 5-minute intervals). Additionally, signal timing plan x c of each signal cycle is also obtained from the signal controller. As can be noted, the real-world obtainable traffic detection data are very coarse-grained. Informative states (e.g., fine-grained queue lengths at each lane) and important performance measures (e.g., intersection delay) for signal control optimization are not directly available, despite that such information is easily accessible in traffic simulators." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "To develop a deployable RL-based solution for real-world traffic signal control, one needs to address two core technical challenges: 1) optimizing a TSC policy without directly available reward information; and 2) learning with coarsegrained and potentially limited offline data. In this paper, we develop the D2TSC framework to tackle the above challenges, which is illustrated in Figure 1. D2TSC leverages real-world intersection data for TSC and consists of two key modules: 1) a reward inference model that extracts queue lengths and delay-based rewards from the coarse-grained TSC data by combining traffic flow theory and machine learning; and 2) a sample-efficient offline RL method that enables stable policy learning from coarse-grained and limited real-world data." }, { "figure_ref": [], "heading": "State and action designs", "publication_ref": [], "table_ref": [], "text": "1) States: We sample both the traffic flows\nx f t = x f t,1 , • • • , x f t,L ∈ R 1×L and spatial vehicle counts x n t = x n t,1 , • • • , x n t,L ∈ R 1×L\nfor each lane in 5-min intervals as a part of the states in our RL problem, where L is the set of all lanes in the intersection. While these features provide coarse-grained information, they are considerably easier to obtain from real-world traffic sensors as compared to information-rich state observations that are only obtainable from traffic simulators. Overall, the raw state features include:\ns t = [x f t , x n t ] ∈ R 1×2L . (7)\n2) Actions: For general |P|-phase intersections, we record the cycle length T c ∈ R, and the green time ratio for each phase of the timing plan\nT g = [T 1 g , T 2 g , ..., T |P|-1 g , 1 - |P|-1 p=1 T p g ] ∈ R 1×|P|\n, where T p g denotes the green time ratio of the p-th phase of the timing plan. Therefore, the green time ratios should sum to 1 and be positive. 
Overall, we can uniquely define a signal timing plan given a specific cycle length T c and the green time ratio for each phase T g , and the action features at step t include: 1+|P|) .\na t = [T c , T g ] ∈ R 1×(\n(8)" }, { "figure_ref": [ "fig_1", "fig_2", "fig_2" ], "heading": "Reward Inference", "publication_ref": [ "b44" ], "table_ref": [], "text": "A key difficulty of using data-driven RL for realistic traffic signal control lies in the absence of ground truth reward information. Performance evaluation metrics that are crucial for TSC, such as reduction in queue lengths and intersection delays, are not directly available from real-world observational data. To address the problem, we propose a novel reward inference model by combining domain knowledge and machine learning. As illustrated in Figure 1 vehicles leave the lane, whereas during the green time, the process of vehicle departure can be divided into two stages (Zhan, Li, and Ukkusuri 2020). In the first stage, the queued vehicles are released, constituting the saturated flow with a flow rate of v s > v n . This stage occurs from the beginning of green time until all queued vehicles are fully dissipated (T r < t ≤ T r + τ ). Once all queued vehicles have left (T r + τ < t ≤ T c ), the departure rate becomes the same as the arrival rate v n . Thus, the arrival and departure processes are expressed as:\nA t = ξ 0 + v n t 0 ≤ t ≤ T c (9) D t =        0 0 ≤ t ≤ T r v s (t -T r ) T r < t ≤ T r + τ v s τ + v n (t -T r -τ ) T r + τ < t ≤ T c x f t = T c(\n10) Ideally, due to flow conservation, the number of vehicles in a lane at time step t is equal to the difference between the cumulative arrival and departure curves (i.e., A t -D t ). However, in real spatial vehicle count data x n t , we can only observe vehicles located in the detection range with length L dr . To account for this restriction, we model the theoretical spatial vehicle count ξ t := Ãd t -D t by introducing the concept of detected cumulative arrival Ãd t , which refers to the cumulative number of vehicles that have entered the detection range.\nObviously, as illustrated in Figure 2(b), we have\nÃd t ≤ A t (11)\nas the observed cumulative arrival at each lane does not exceed the actual value. Also, according to the Newell's kinetic theory (Newell 1993), the detected cumulative arrival Ãd t and the cumulative departure D t within a detected lane section of a given length have the following relationship:\nÃd t ≤ D t-t S + L dr k j (12\n)\nwhere t S is the shockwave propagation time required for traversing the detection range with a length of L dr , i.e., t S = L dr /w 2 . k j is the jam density. They are hyperparameters that reflect the inherent traffic flow characteristics of the laneand can be derived from the fundamental diagram (FD), which will be detailed in Section . Due to the limited detection range, it is common for the lane queue length to exceed the detection range during the red time, resulting in a \"local spillback\". In the event of a local spillback, newly arriving vehicles cannot enter the detected range until the queue tail begins to dissipate. Consequently, Eq. ( 12) specifies that the detected cumulative arrival at any time t should be constrained by its cumulative departure condition of a shockwave propagation time t S earlier. Combining Eqs. ( 11) and ( 12), the detected cumulative arrival Ãd t of each lane can be approximated as\nÃd t ≈ min{A t , D t-t S + L dr k j }(13)\nBy substituting Eqs. ( 9) and (10) into Eq. 
( 13), we have ξ\nξ t := Ãd t -D t ≈ min{A t -D t , D t-t S -D t + L dr k j } = min{ξ (1) t , ξ (2) t + L dr k j } (14) where ξ (1) t = ξ 0 +      v n t, 0 ≤ t ≤ T r -v s T r + (v n -v s )t, T r < t ≤ T r + τ v n T c -x f , T r + τ < t ≤ T c (15)\n(2) t =                              0, 0 ≤ t ≤ Tr -vs(t -Tr), Tr < t ≤ Tr + t S , t S < τ -vst S , Tr + t S < t ≤ Tr + τ, t S < τ -vs(t -Tr), Tr < t ≤ Tr + τ, t S ≥ τ vnTc -vs(t S + Tr) -x f + (vs -vn)t, Tr + τ < t ≤ Tr + τ + t S , t S < Tg -τ -vnt S , Tr + τ + t S < t ≤ Tc, t S < Tg -τ -x f + vnTc -vnt, Tr + τ < t ≤ Tr + t S , t S ≥ Tg -τ -vs(t S + Tr) + vnTc -x f + (vs -vn)t Tr + t S < t ≤ Tc, t S ≥ Tg -τ (16) and τ = x f -v n T g v s -v n ∈ [0, T g ](17)\nEq. ( 14) introduces a parameterized model for the lane queuing process in a signal cycle with a partial detection range. This model facilitates the comprehensive representation of the theoretical spatial vehicle count ξ t , as a function of a series of traffic flow parameters θ = {v n , v s , ξ 0 }, i.e., ξ t = ξ t (θ).\nExtension to multi-cycle queuing process In order to derive the parameterized queuing process ξ t (θ), it is necessary to have access to the cycle-level traffic flow x f of each lane. However, in practice, the lane-based traffic flow is often collected at fixed time intervals (such as 5 minutes), thereby making it challenging to directly acquire traffic flow for every signal cycle. To address the issue, we provide a simplistic estimation approach to extend the queuing process modeling to multi-cycle situations.\nTo estimate ξ t (θ) using lane-based traffic flow across multiple signal cycles, we maintain the assumption of constant arrival rate within each signal cycle, additionally, we assume that the arrival rate of each signal cycle is proportional to the maximum observed spatial vehicle count in the cycle. Figure 3 illustrates the general queuing process for multi-cycle situations. Suppose that there are C complete signal cycles during the flow detection interval, and, with a slight abuse of notations, let x f represent the total flow in these cycles. We denote the arrival rate, cycle flow, cycle beginning time, red and green durations, and cycle length of the c-th cycle as\nv c n , x f,c , t c r , T c r , T c\ng , and T c c , respectively. Based on the proportionality assumption of cycle arrival rates, we have the following relationship:\nv c n max t c r ≤t<t c+1 r (x n t ) = v c ′ n max t c ′ r ≤t<t c ′ +1 r (x n t ) = ζ, ∀c, c ′ ∈ {1, • • • , C}(18)\nwhere ζ is the normalized arrival rate. According to Figure 3, the cycle and total traffic flows should satisfy the following conservation conditions,\nx f = C i=1 x f,c + x nr (19) x f,c = x n t c r + v c n T c c -x n t c+1 r , ∀c ∈ {1, • • • , C}(20)\nwhere x nr is the remaining spatial vehicle count at the end of the flow detection interval. Equations ( 18)-( 20) yield a solution {x f,c }. Consequently, the general multi-cycle estimation problem can be simplified into multiple independent single-cycle queuing process problems. One can estimate the parameters θ for each lane in each signal cycle by fitting it to the empirical spatial vehicle counts x n t . To this end, we develop a non-parametric approach to find the most likely theoretical spatial queue count curve using a Gaussian process interpolation model in the next section." 
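As a sanity check on the model above, the sketch below evaluates the theoretical spatial vehicle count of a single cycle numerically, by building the cumulative arrival and departure curves of Eqs. (9)-(10) and applying the detection-range bound of Eq. (14), rather than expanding the closed-form piecewise cases of Eqs. (15)-(16). The function names, the clamping of the lagged departure curve at the cycle start, and the convention that k_j is a jam density in vehicles per meter (so that L_dr * k_j is the number of vehicles the detection range can hold) are our assumptions, and the example values at the end are purely illustrative.

```python
import numpy as np

def cumulative_departure(t, T_r, tau, v_s, v_n):
    """Cumulative departures D_t within one cycle (Eq. 10); t may be an array."""
    t = np.clip(np.asarray(t, dtype=float), 0.0, None)
    return np.where(t <= T_r, 0.0,
           np.where(t <= T_r + tau, v_s * (t - T_r),
                    v_s * tau + v_n * (t - T_r - tau)))

def spatial_count(t, theta, x_f, T_c, T_g, L_dr=150.0, k_j=0.15, t_S=25.0):
    """Theoretical spatial vehicle count xi_t(theta) under a partial detection
    range (Eq. 14), for theta = (v_n, v_s, xi_0)."""
    v_n, v_s, xi_0 = theta
    T_r = T_c - T_g
    tau = np.clip((x_f - v_n * T_g) / (v_s - v_n), 0.0, T_g)        # Eq. (17)
    A = xi_0 + v_n * np.asarray(t, dtype=float)                      # Eq. (9)
    D = cumulative_departure(t, T_r, tau, v_s, v_n)
    D_lag = cumulative_departure(np.asarray(t, dtype=float) - t_S, T_r, tau, v_s, v_n)
    return np.minimum(A - D, D_lag - D + L_dr * k_j)                 # Eq. (14)

# Example: a 120 s cycle with 40 s of green, arrival rate 0.1 veh/s,
# saturation rate 0.5 veh/s, and 3 vehicles already queued.
t = np.arange(0.0, 120.0, 5.0)
xi = spatial_count(t, theta=(0.1, 0.5, 3.0), x_f=12.0, T_c=120.0, T_g=40.0)
```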
}, { "figure_ref": [ "fig_1", "fig_1", "fig_3", "fig_1", "fig_1" ], "heading": "Gaussian process interpolation", "publication_ref": [ "b4", "b3", "b9", "b26" ], "table_ref": [], "text": "The Gaussian process interpolation technique is a favorable tool within the realm of Bayesian statistical modeling and machine learning. It establishes a probability distribution over functions, linking output y to input x, wherein the values of y for any set of inputs x 1 , x 2 , • • • , x N jointly conform to a Gaussian distribution. As illustrated in Figure 2(c), we can model the empirical observations x n t as the parameterized theoretical spatial vehicle count curve ξ t (θ) with Gaussian disturbances, i.e., where ε ∼ N (0, I) represents a Gaussian distributed disturbance, and η is a scale hyperparameter. Then it will lead to a Gaussian process:\nx n t (θ) = ξ t (θ) + ηε,(21)\nGP (t|θ) ∼ N (ξ t (θ), Ker(t, t|η))(22)\nwhich is parameterized by the mean process of ξ t (θ) and a covariance kernel matrix Ker(t, t|η)), defined as\nKer(t, t|η) =    ϕ(t 1 , t 1 |η) . . . ϕ(t 1 , t n |η) . . . . . . . . . ϕ(t n , t 1 |η) . . . ϕ(t n , t n |η)   (23)\nwhere ϕ(•|η) is the kernel function used to measure the covariance between time stamps of a pair of observed spatial vehicle count x n ti and x n tj in a signal cycle. We adopt the squared exponential function with the regulation term as the kernel function:\nϕ(t i , t j |η) = h 0 e - t i -t j λ 2 + η 2 δ(t i , t j )(24)\nwhere δ(t i , t j ) = 1 if t i = t j and 0 otherwise, and h 0 , λ are scale hyperparameters. Essentially, x n t (θ) in Eq. ( 21) can be regarded as the observed spatial vehicle count x n t drawn at timestamps t from a multivariate Gaussian distribution parameterized by θ and η. Therefore, the traffic parameters θ of a signal cycle that parameterizes the most likely theoretical spatial vehicle count ξ t given the observations x n t can be learned by maximizing the joint probability distribution of GP (t|θ).\nInferring queuing process parameters θ requires performing maximum likelihood estimation on Gaussian process GP (t|θ). However, the joint probability distribution GP (t|θ) is complex given a non-differentiable piece-wise linear function ξ t (θ) in Eq. ( 14). To this end, we develop a Metropolis-Hastings (M-H) algorithm that efficiently infers θ using the using Markov Chain Monte-Carlo (MCMC) technique. M-H algorithm is a widely used sampling method, which is useful in estimating parameters from complex probability distributions (Bishop and Nasrabadi 2006;Berger 2013). The developed M-H algorithm for learning parameters θ is given in Table 1.\nQueue length and delay estimation Once we have estimated the queuing process parameters θ = {v n , v s , ξ 0 }, Table 1: M-H algorithm for θ inference.\nInput: Timestamps t, the observed spatial vehicle count x n , and the cycle flow x f . Output: Estimated parameters θ\nStep 1: Initialize θ (0)\nStep 2: Sample θ from a uniform distribution\nStep 3: Computing the likelihood of θ and accept it as θ (k) according to the probability of β = min(p(x n | θ)/p(x n |θ (k-1) ), 1)\nStep 4: Repeat steps 2-3 for a given number of iterations until θ (k) is stable\nStep 5: Discard the first 75% of accepted samples as burn-in, and use the mean of remaining values as the result of θ we can further use them to infer the lane-level queue length and delay in each signal cycle. 
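A compact sketch of the sampler outlined in Table 1 is given below. It reuses spatial_count from the earlier sketch as the GP mean, evaluates the likelihood of Eqs. (21)-(24) with SciPy, and draws independence proposals uniformly from user-supplied bounds on θ = (v_n, v_s, ξ_0); those bounds, the random-seed handling, and the wiring of the hyperparameters are assumptions of ours rather than details fixed by the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gp_loglik(theta, t, x_n, x_f, T_c, T_g, h0=0.5, lam=2.0, eta=1.0):
    """Log-likelihood of observed counts x_n under GP(t | theta), Eqs. (21)-(24),
    using spatial_count() from the previous sketch as the mean curve."""
    t = np.asarray(t, dtype=float)
    mean = spatial_count(t, theta, x_f, T_c, T_g)
    K = h0 * np.exp(-(((t[:, None] - t[None, :]) / lam) ** 2)) + eta ** 2 * np.eye(len(t))
    return multivariate_normal.logpdf(x_n, mean=mean, cov=K, allow_singular=True)

def mh_estimate(t, x_n, x_f, T_c, T_g, bounds, n_iter=1000, seed=0):
    """Independence Metropolis-Hastings in the spirit of Table 1: propose theta
    uniformly within `bounds`, accept with prob min(1, p(x_n|theta')/p(x_n|theta)),
    discard the first 75% of samples as burn-in, and return the mean of the rest."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T     # bounds has shape (3, 2)
    theta = rng.uniform(lo, hi)
    ll = gp_loglik(theta, t, x_n, x_f, T_c, T_g)
    samples = []
    for _ in range(n_iter):
        prop = rng.uniform(lo, hi)
        ll_prop = gp_loglik(prop, t, x_n, x_f, T_c, T_g)
        if np.log(rng.uniform()) < ll_prop - ll:   # acceptance test in log space
            theta, ll = prop, ll_prop
        samples.append(theta)
    return np.mean(samples[int(0.75 * n_iter):], axis=0)
```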
According to the geometric relationship illustrated in Figure 2(a), the maximum queue length q max of each lane can be estimated by\nq max = w 2 (q 0 + w 1 T r ) w 2 -w 1 (25)\nwhere w 1 , w 2 represent the stopping and starting shockwave speed, respectively, and q 0 = ξ 0 k j is the initial queue length at the beginning of the cycle, k j is the jam density of the lane. In the shockwave theory, these parameters can be inferred from the fundamental diagram (FD), which is a common representation of inherent relationship between traffic flow v and density k. In this study, we assume that each lane independently satisfies the following flow-density relationship (Daganzo 1997;Jin 2015), as illustrated in Figure 4:\nv = G(k) = min{uk, w(k j -k)}(26)\nThe FD hyperparameters include the jam density k j > 0, the free-flow speed u > 0, and the shockwave speed w > 0. According to the shockwave theory, the stopping shockwave speed in Figure 2(a) during the red time can be calculated from the slope of the secant line AC on the FD curve,\nw 1 = k AC = v n k n -k j = 1 1/u -k j /v n = - 1 k j (1/v n -1/v s ) + t S /L dr (27)\nHere, w 1 < 0 suggests that the shockwave propagates backward (opposite to the driving direction of vehicles). Similarly, the starting shockwave speed can be obtained by\nw 2 = k BC = -w(28)\nWith w 1 , w 2 and q 0 being parameterized by the estimated queue length parameters θ = {v n , v s , ξ 0 } and static FD hyperparameters {k j , u, w}, we can conveniently calculate maximum queue length q max of each lane in each signal cycle using Eq. ( 25). The cycle-level total delay d of each lane (total waiting time spent in the queue) can thus be estimated as the area enclosed by the shockwave boundaries as shown in Figure 2(a), mathematically,\nd = 1 2 T r q max + q 0 (w 2 T r + q 0 ) w 2 -w 1 (29)\nReward function parameterization To be compatible with coarse-grained traffic flow data, we design our RL agent to make decisions in 5-min intervals. Suppose that there are C t complete signal cycles within the t-th 5-min interval, we define the reward as the average vehicle delay within these cycles. In real-world traffic signal control systems, average vehicle delay is commonly used as a direct indicator of the level of service (LoS) of an intersection. We can thus calculate the rewards by weighting the cycle total flow x f,c t,l to the cycle total delay d l,c of lane l of signal cycle c, namely,\nr t = L l=1 Ct c=1 d l,c x f,c t,l L l=1 Ct c=1 x f,c t,l (30) Both x f,c\nt,l and d l,c in Eq. ( 30) can be estimated from the coarse-grained information using the proposed techniques in Sections and . Finally, these reward values will be normalized and used for offline RL policy training." }, { "figure_ref": [], "heading": "Signal Control Optimization via Offline RL", "publication_ref": [ "b61", "b28", "b61", "b17", "b23", "b61", "b21", "b61", "b17", "b23", "b61", "b29", "b61", "b43", "b43", "b52", "b52" ], "table_ref": [], "text": "With the inferred rewards, we now present the proposed sample-efficient offline RL method used in D2TSC. The coarse-grained and potentially limited real-world TSC data poses a great technical challenge for offline policy learning. To tackle this, we first introduce a phase pooling design to facilitate effective traffic demand information extraction under the time-varied phase orders (Section ). Then, we employ the SOTA in-sample learning offline RL algorithm to achieve stable and high-performing policy learning (Section ). 
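Before moving on to the learning algorithm, the shockwave-based estimates of Eqs. (25)-(30) above can be summarized in a short sketch. The bracketing of Eq. (29) is read so that both terms are areas in the time-space diagram, the sign conventions for w_1 and w_2 follow the discussion around Figure 2(a) and the fundamental diagram, and k_j is treated here as a jam spacing in m/veh so that q_0 = ξ_0 * k_j is a length; all names and argument layouts below are ours.

```python
import numpy as np

def max_queue_and_delay(xi_0, k_j, T_r, w1, w2):
    """Per-cycle maximum queue length (Eq. 25) and total lane delay (Eq. 29).
    w1 and w2 are the stopping/starting shockwave speeds from Eqs. (27)-(28);
    q_0 = xi_0 * k_j is the initial queue length at the start of the cycle."""
    q0 = xi_0 * k_j
    q_max = w2 * (q0 + w1 * T_r) / (w2 - w1)
    delay = 0.5 * (T_r * q_max + q0 * (w2 * T_r + q0) / (w2 - w1))
    return q_max, delay

def interval_reward(cycle_delays, cycle_flows):
    """Flow-weighted average delay over all lanes and cycles in one 5-minute
    interval (Eq. 30). Both inputs have shape [num_lanes, num_cycles]."""
    d = np.asarray(cycle_delays, dtype=float)
    f = np.asarray(cycle_flows, dtype=float)
    return float((d * f).sum() / f.sum())
```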
In addition, we propose a data augmentation scheme (Section ) to improve the sample efficiency and the OODgeneralization ability of the learned TSC policy given limited available data.\nPhase information modeling For intersections that have |P| phases with K varied phase orders, it is challenging to accurately grasp the traffic demand for each phase solely from the limited coarse-grained data, as the same lane may belong to different phases in different time periods. This challenge imposes stringent requirements on data quantities and qualities for RL agents to learn good TSC policies. However, in practice, the available offline data are coarsegrained and typically have narrow state-action space coverage, which makes it hard to distill informative features for RL policy learning. Therefore, it is crucial to incorporate some domain knowledge to facilitate RL policy learning.\nIn this work, we design a phase pooling layer PPL(•) to facilitate the RL agent to better understand the intersectionwide traffic pattern. Specifically, for a specific phase order, we aggregate the traffic information of each phase from the coarse-grained features. In details, all K phase orders could be modeled in a metric Φ M ∈ R K×|P|×L that contains only binary values, where Φ M (k, p, l) = 1 represents that the l-th lane should display green at the p-th phase for the k-th phase order and Φ M (k, p, l) = 0 denotes red signal. In practice, the intersection will select a phase order k and follow the orders defined in Φ M to display green or red signals for each lane for a period of time and then switch to another phase order according to some human-defined rules. To model this, we also incorporate a one-hot embedding of the phase order ID Φ ID ∈ R 1×K to indicate the specific phase order that the intersection currently undergoes. For instance, Φ ID (k) = 1 means the intersection currently undergoes the k-th phase order. In this case, Φ ID Φ M ∈ R |P|×L can uniquely determine a phase pattern. Then, Φ ID Φ M (p, l) = 1 means that the current intersection has green signal at the p-th phase for the l-th lane and Φ ID Φ M (p, l) = 0 corresponds a red signal. The phase pooling layer is thus defined as:\nPPL(x f t , Φ ID , Φ M ) = x f t (Φ ID Φ M ) T ∈ R 1×|P| , (31\n) where the p-th item of x f t (Φ ID Φ M ) T represents the aggregated flow traffic demand information of the p-th phase (sum of the flow information for all the lanes that display green signal at the p-th phase). This provides more direct phasebased information to facilitate TSC RL policy learning, as a high traffic demand for a phase often suggests a larger green time ratio should be allocated for this phase. Finally, the augmented state features at step t include:\nŝt = x f t , x n t , Φ ID , PPL(x f t , Φ ID , Φ M ) ∈ R 1×(2L+K+|P|)(32)\nIn-sample offline RL algorithm In this paper, we consider the behavior regularized MDP framework in Eq. ( 4) for offline TSC policy learning and turn the optimization objectives in Eq. (5-6) to its in-sample learning form for its great training stability and SOTA performances. As formally studied in Xu et al. (2023), the behavior regularized MDP is closely related to a class of state-of-the-art (SOTA) insample learning offline RL methods (Kostrikov, Nair, and Levine 2021;Xu et al. 2023;Garg et al. 2023;Hansen-Estruch et al. 2023). Specifically, Eq. (5-6) can be easily transferred to its in-sample version by solving the KKT condition of Eq. (5-6) (Xu et al. 2023). In detail, different choices of the f -function in Eq. 
( 4) will lead to different insample learning algorithms, but they all share the following general learning objectives:\nmin V E (s,a)∼D L f V (Q(s, a) -V (s)) , (33\n) min Q E (s,a,s ′ )∼D [r(s, a) + γV (s ′ ) -Q(s, a)] 2 , (34\n)\nmin π E (s,a)∼D L f π (Q(s, a) -V (s)) log π(a|s) , (35\n)\nwhere Q(s, a) and V (s) are the action-value function and state-value function, respectively; the forms of L f V (•) and L f π (•) depend on the specific choice of f -function. For instance, we can choose f (x) = log(x), where L f V (x) = exp (x/α) -x/α and L f π (x) = exp (x/α). This constructs a standard KL-divergence regularized MDP (Haarnoja et al. 2018) and is equivalent to an in-sample learning method, Exponential Q-Learning (EQL) (Xu et al. 2023;Garg et al. 2023;Hansen-Estruch et al. 2023). However, the exponential term in EQL typically faces numerical instability, which requires additional regularization tricks to stabilize the training. Therefore, we choose f (x) = x -1 in this paper instead, which is equivalent to another SOTA in-sample learning offline RL method, Sparse Q-Learning (SQL) (Xu et al. 2023), and solves a Neyman χ 2 -divergence regularized objective (Kumar et al. 2020), from which L f V (•) and L f π (•) in the general learning objectives in Eq. ( 33)-( 35) become:\nL f V (x) = I(1 + x/2α > 0)(1 + x/2α) 2 -x/α, (36) L f π (x) = I(1 + x/2α > 0)(1 + x/2α),(37)\nwhere I(•) is the indicator function. Observe in Eq. ( 36) that the state-value function (V (s)) learning objective seeks to regress only on high Q-values Q(s, a) where 1 + x/2α > 0, which can implicitly find the optimal state-value function covered by the offline dataset (Xu et al. 2023). Meanwhile, Eq. ( 37) shows that the policy behaves in a weighted regression manner (Nair et al. 2020) that only maximizes the likelihood on the regions that attains high values, thus producing optimized policy that leads to good outcomes.\nOptimizing the in-sample learning objectives in Eq. (33-35) enjoys several advantages compared to directly optimizing Eq. (5-6). First, note that minimizing the objectives in Eq. (33-35) does not need to explicitly learn a behavior policy µ to enforce policy regularization with respect to offline data, thus bypassing the notoriously difficult behavior estimation problem (Nair et al. 2020). Second, optimizing the in-sample learning objectives in Eq. (33-35) involves only samples from the offline dataset D, without any counterfactual reasoning on potential OOD policy-induced samples, thereby offering great learning stability as compared to other types of offline RL methods. On the contrary, the objectives in Eq. (5-6) are susceptible to overestimation errors at OOD regions. Specifically, Eq. ( 5) involves inferring action value function Q(s ′ , a ′ ) for actions a ′ generated by the policy π, which may not present in the offline dataset and can easily lead to erroneous value estimation. Considering these advantages, we choose to turn Eq. (5-6) to its corresponding in-sample learning offline RL methods to solve the behavior regularized MDP instead in this paper. This facilitates learning the optimal policy using only data that is seen in the offline dataset, avoiding the difficult behavior estimation and bypassing the potential over-estimation issue that arises from the counterfactual reasoning at OOD regions.\nImproving sample-efficient via data augmentation By optimizing Eq. (33-35), we can in principle acquire a good TSC policy given enough offline data with good coverage over the state-action space. 
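To make the in-sample objectives concrete, the following PyTorch-style sketch writes out Eqs. (33)-(37) for the f(x) = x - 1 (SQL) case. It assumes q_net(s, a) and v_net(s) return one value per sample and that policy(s) returns a torch distribution with a log_prob method; the sign of the policy loss follows the weighted-regression reading described above (maximizing the weighted log-likelihood), and none of the names or defaults here are taken from an official implementation.

```python
import torch

def sql_losses(q_net, v_net, policy, batch, alpha=0.01, gamma=0.99):
    """One-step losses for the in-sample objectives of Eqs. (33)-(37) with
    f(x) = x - 1 (the SQL case of the behavior regularized MDP)."""
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]

    # Eq. (36): value loss; only samples with 1 + x/(2*alpha) > 0 contribute
    # to the squared term, while the -x/alpha term is always kept.
    x = q_net(s, a).detach() - v_net(s)
    v_loss = (torch.clamp(1.0 + x / (2.0 * alpha), min=0.0) ** 2 - x / alpha).mean()

    # Eq. (34): TD regression of Q toward r + gamma * V(s'), with V(s') detached.
    target = r + gamma * v_net(s_next).detach()
    q_loss = ((target - q_net(s, a)) ** 2).mean()

    # Eqs. (35)/(37): weighted behavior cloning; maximizing the weighted
    # log-likelihood is implemented as minimizing its negative.
    adv = (q_net(s, a) - v_net(s)).detach()
    weight = torch.clamp(1.0 + adv / (2.0 * alpha), min=0.0)
    pi_loss = -(weight * policy(s).log_prob(a)).mean()
    return v_loss, q_loss, pi_loss
```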
However, in-sample learning may exhibit poor generalization capability in OOD regions during deployment due to the absence of supervision signal on OOD regions (Li et al. 2023b). Furthermore, real-world TSC datasets are typically small and have narrow coverage of the state-action space, which further exacerbates the challenge of OOD generalization.\nTo address this, we introduce a data augmentation scheme inspired by S4RL (Sinha, Mandlekar, and Garg 2022) that generates augmented data to mitigate the small dataset learning problem and enhance generalization in OOD regions. Specifically, during training of V and Q, small Gaussian noises ϵ are added to the states in the data batch (i.e., s := s + ϵ), leading to the following augmented training objective:\nmin V E (s,a)∼D,ϵ∼N (0,σ 2 ) L f V (Q(s, a) -V (s)) , min Q E (s,a,s ′ )∼D,ϵ∼N (0,σ 2 ) (r(s, a) + γV (s ′ ) -Q(s, a)) 2\nThis data augmentation can be perceived as smoothing the value function in its local ϵ-ball, which is beneficial to combat small perturbations and thus improves generalization on OOD regions. In addition, the augmented data can increase the dataset quantity and thus improve the overall sample efficiency of the offline RL algorithm (Sinha, Mandlekar, and Garg 2022)." }, { "figure_ref": [ "fig_5" ], "heading": "Experiments Experiment Settings", "publication_ref": [ "b38", "b12" ], "table_ref": [], "text": "Offline datasets and evaluation protocol. In this section, we evaluate our proposed D2TSC framework against several competitive baselines. Specifically, we collect 7 days (June 16-22, 2023) of historical traffic data from one test intersection in Zhuzhou, China, which is a complex 4-phase intersection with 17 lanes, as shown in Figure 6. We develop a highly customized simulation environment based on SUMO (Lopez et al. 2018) that strictly follows the realworld traffic characteristics observed in the collected 7 days of historical traffic data to provide comparable evaluations of all methods since it is infeasible to deploy some inferior baselines in real-world intersections for testing, which can be risky. To obtain the offline dataset for policy learning, we generate 100 days of data from the highly customized simulator using the behavior cloning policy trained on the 7 days' real datasets, as collecting a large amount of real-world data is quite costly. Note that the simulator is used only for generating the offline datasets and evaluating the policies, but is not accessible during the training process. Also, the generated data are coarse-grained to align with the realistic TSC data.\nD2TSC hyperparamters For the hyperparameters of D2TSC in our experiments, we use 2-layer MLPs to model the Q function, V function, and policy π in Eq. ( 33)-( 35). The learning rates for all networks are 3e -5 since the offline datasets are small, from which a large learning rate may lead to severe overfitting. We use the Adam optimizer to optimize the network parameters. We normalize all the rewards and state features in the offline dataset to N (5, 1) and N (0, 1); the actions are normalized to the range of [0, 1]. The regularization weight α in Eq. ( 4) is set to 0.01, since we observe that a large conservative strength may be overconservative and lead to unsatisfactory performances. For the data augmentation scale, the standard deviation of the added Gaussian noise is 0.01. 
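Read together with the noise clipping described in the next sentence, the augmentation amounts to the small helper below. The values 0.01 and 0.025 are the ones reported in this section, while the helper name and the way it is wired into the value losses (reusing the sql_losses sketch above, and whether the next state is perturbed as well) are illustrative choices of ours.

```python
import torch

def augment_states(s, sigma=0.01, clip=0.025):
    """S4RL-style augmentation: add zero-mean Gaussian noise to the (normalized)
    state features, clipped so that no single perturbation exceeds +/- clip."""
    noise = torch.clamp(torch.randn_like(s) * sigma, -clip, clip)
    return s + noise

# During training, the V and Q losses are simply evaluated on perturbed states:
# noisy_batch = {**batch, "s": augment_states(batch["s"])}
# v_loss, q_loss, _ = sql_losses(q_net, v_net, policy, noisy_batch)
```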
We also clip the sampled noise to be smaller than 0.025 akin to TD3 and TD3+BC (Fujimoto, Hoof, and Meger 2018;Fujimoto and Gu 2021) to avoid unrealistic augmented data caused by the overly large added noise." }, { "figure_ref": [], "heading": "Baseline Methods", "publication_ref": [ "b47", "b28", "b29", "b59", "b40" ], "table_ref": [], "text": "In this paper, we consider the following baselines for comparison. For all baselines, we keep the network architecture, learning rates, and optimizer the same as D2TSC for a fair comparison. Furthermore, same as D2TSC, we use our inferred rewards to train all RL baselines, as there is no reward signal from the offline TSC dataset, although these methods themselves do not have the capability to learn from realworld reward-free TSC data.\n• Fixed plan is the fixed background timing plan used in the actual real-world test intersection. • Conventional TSC refers to a saturation-balance adaptive control method devised by domain experts and has been deployed in the test intersection for commercial usage. Therefore, this approach is quite high-performing and can be regarded as a near-expert policy. • Behavior cloning (BC) (Pomerleau 1988) is a widely used imitation learning method that directly mimics the offline data to learn the policy. • TD3+BC (Fujimoto and Gu 2021) is a well-known policy constraint offline RL method. We re-implement the official codes to be compatible with our simulation evaluation environment. We carefully tune its hyperparameters to ensure the best performance. • IQL (Kostrikov, Nair, and Levine 2021) is another SOTA in-sample learning offline RL method that utilizes expectile regression to optimize the value functions and extracts policy using advantage-weighted regression. We also reimplement the official codes and fine-tune its hyperparameters to be suitable to our TSC problem. • DataLight (Zhang and Deng 2023) is an offline TSC optimization method built upon the offline RL algorithm CQL (Kumar et al. 2020). We re-implement the official codes and use the original training parameters. We adapt its input states to be the same as our state features. We also modify their action space to be able to control the phase in our setting. • DemoLight (offline) (Xiong et al. 2019) is a recent TSC optimization method that utilizes BC pretrain to accelerate online RL training using A2C algorithm (Mnih et al. 2016). To be compatible with our offline setting, we consider an offline version of DemoLight that initializes the policy with expert demonstrations using BC, and then continues to run A2C on the fixed offline dataset without online interactions." }, { "figure_ref": [ "fig_4" ], "heading": "Reward Inference Evaluation", "publication_ref": [], "table_ref": [], "text": "The proposed reward inference approach is evaluated on our customized simulation environment configured by realworld traffic data. Vehicle positions and speeds are recorded from the simulator every second (regardless of the detection range). As a result, it is possible to compute the actual delay and queue length. Individual vehicle delay is defined as the cumulative difference between its speed and the free-flow speed over time. The lane delay is calculated by summing all individual vehicle delays occurring in the lane. 
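For reference, the ground-truth delay metric just described can be computed from the per-second simulator logs with a few lines. The helper name, the array layout, and the clipping of negative speed differences are our assumptions; the computation simply accumulates the gap between the free-flow speed and the recorded speed for each vehicle, as stated above, and sums over vehicles in the lane.

```python
import numpy as np

def lane_delay(vehicle_speeds, v_free, dt=1.0):
    """Ground-truth lane delay: for each vehicle, accumulate (v_free - v) * dt
    over its per-second speed trace, then sum over all vehicles in the lane."""
    total = 0.0
    for speeds in vehicle_speeds:                    # one speed trace per vehicle
        deficit = np.clip(v_free - np.asarray(speeds, dtype=float), 0.0, None)
        total += float(deficit.sum() * dt)
    return total
```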
The maximum queue length of a lane is determined as the farthest position of stopped vehicles (with speeds below 5.4 km/h) in the lane during a signal cycle, corresponding to the maximum range of the shockwave.
In our model, the Gaussian process hyperparameters are selected as h 0 = 0.5, λ = 2, and η = 1. The shockwave hyperparameters are L dr = 150 m, k j = 7.5 m/veh, and t S = 25 s. The estimation of parameters θ using the M-H algorithm involves 1000 iterations per signal cycle for each lane. Only lanes associated with signal-controlled traffic movements (i.e., left-turn and straight, excluding right turn) are included in the maximum queue length estimation and in the total intersection delay.
Figure 5 illustrates the estimation results of the 5-minute total delay for the test intersection. The estimated total delay closely aligns with the ground-truth delay, exhibiting similar distributions. Note that under worse traffic conditions, certain lanes may experience extreme queuing that exceeds the detection range. Directly adopting the observed spatial vehicle counts as queue lengths would therefore underestimate the queuing conditions. In contrast, our proposed estimation approach captures the real delay very well, mainly because it builds a parametric formulation of the actual queuing process. The estimated queue lengths also match the ground truth well during peak hours. These results demonstrate that our model can provide robust and reliable reward signals for TSC optimization." }, { "figure_ref": [], "heading": "Signal Control Optimization Results", "publication_ref": [ "b15" ], "table_ref": [ "tab_0", "tab_0", "tab_0", "tab_0", "tab_0", "tab_0", "tab_0" ], "text": "Given the inferred reward, we train our method and other baselines for 1M steps and report the total intersection delays and total queue lengths of the best models in Table 2. We also provide ablation studies to evaluate the effectiveness of our proposed phase pooling layer and data augmentation methods.
Effectiveness of inferred rewards. The reward inference evaluation in Section 5.2 convincingly demonstrates that the estimated rewards can accurately reflect the ground-truth delay and queue length information. However, it is widely known that even a small error in the reward signal may greatly mislead RL policy learning and cause undesired outcomes (Li et al. 2023a). Therefore, the optimized policies should also be evaluated against the ground-truth metrics of delay and queue length to further verify the effectiveness of the inferred rewards.
To further examine the impact of the inferred rewards on the final performance of RL agents, we utilize the inferred rewards to train our D2TSC framework and other offline RL methods, including DataLight, TD3+BC, IQL, and DemoLight (offline), and evaluate the optimized policies according to the ground-truth delay and queue length criterion. The results can be found in Table 2. Note that the underperformance of DemoLight is not attributed to the inaccuracy of the inferred rewards. Instead, it stems from the fact that DemoLight continues to run the online RL algorithm A2C on a fixed offline dataset without any online interaction, even though A2C is designed for online learning. This mismatch leads to substantial challenges, including severe distributional shift and an accumulation of overestimation errors (Fujimoto, Meger, and Precup 2019). By contrast, after addressing the inherent challenges of the offline RL setting, Table 2 shows that most offline RL methods, including DataLight, TD3+BC, IQL, and D2TSC (ours), exhibit superior performance compared to the BC baseline.
This outcome underscores that the inferred rewards do indeed serve as robust guiding signals for RL training. Moreover, these findings demonstrate, for the first time, that reliable performance evaluation metrics and RL training rewards can be extracted from coarse-grained TSC data alone, paving the way for future research in more realistic TSC settings where fine-grained information is hard to access and only coarse-grained information is available.
Optimization performance. Leveraging the inferred rewards, we conduct comprehensive comparisons among various methods, highlighting the superiority of the proposed D2TSC framework. Table 2 shows that our D2TSC framework consistently outperforms all baselines in reducing total intersection delays and total queue lengths, achieving top-tier performance across all seven days tested. While the degree of improvement over established baselines, such as the Conventional TSC approach, might initially appear modest, it is important to note that the Conventional TSC approach is a near-expert policy: it is carefully designed and well-tuned by domain experts, and it has been deployed in real-world intersections for commercial usage. For instance, Table 2 shows that the Conventional TSC approach already improves substantially over the fixed timing plan. Consequently, the consistent outperformance of the D2TSC framework over this competitive approach underscores the significant advancements and practical effectiveness of the D2TSC framework in traffic signal control.
For a more granular analysis, we visualize the evaluated delays and queue lengths within a whole day (19 Jun.) in Figure 6, drawing comparisons with the Conventional TSC method and the SOTA offline RL baseline IQL. Figure 6 clearly highlights that D2TSC alleviates traffic congestion more effectively throughout the entire day than the baselines, with lower delay and shorter queues.
Ablation on phase pooling layer and data augmentation. In addition, we conduct an ablation study to assess the contributions of our proposed Phase Pooling Layer (PPL) and data augmentation strategies in enhancing optimization outcomes. Table 2 shows that the performance of D2TSC noticeably dips when either the phase pooling layer (D2TSC w/o PPL) or the data augmentation (D2TSC w/o AG) is excluded. This underperformance confirms the effectiveness of the proposed phase pooling layer in distilling traffic demand patterns, which in turn facilitates more informed and efficient RL training. Meanwhile, the results demonstrate the importance of data augmentation in boosting sample efficiency and out-of-distribution generalization for offline RL policy learning in the small-data regime. These findings not only validate the individual significance of these components but also highlight their synergistic effect in elevating the overall efficacy of the D2TSC framework." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce D2TSC, a fully data-driven RL framework for realistic traffic signal control. We develop a novel reward inference model by integrating well-established traffic flow theory with Gaussian process interpolation to estimate reward signals from coarse-grained traffic data. Utilizing the inferred rewards, we propose a sample-efficient offline RL method that directly learns optimized signal control policies from small real-world TSC datasets.
The entire procedure of D2TSC can be accomplished in a fully offline manner with coarse-grained real-world TSC data, thereby addressing the key drawbacks of existing RL-based TSC methods that rely on traffic simulators and high-quality TSC data. Our experiments demonstrate that D2TSC outperforms conventional and offline RL baselines, offering a promising framework for deploying offline RL to solve realistic traffic signal control problems." }, { "figure_ref": [], "heading": "Acknowledgment", "publication_ref": [], "table_ref": [], "text": "This work is supported by funding from Baidu Inc. This work is also supported by the National Key Research and Development Program of China under Grant (2022YFB2502904)." } ]
The optimization of traffic signal control (TSC) is critical for an efficient transportation system. In recent years, reinforcement learning (RL) techniques have emerged as a popular approach for TSC and show promising results for highly adaptive control. However, existing RL-based methods suffer from notably poor real-world applicability and hardly have any successful deployments. Such failures are mostly due to the reliance on over-idealized traffic simulators for policy optimization, as well as the use of fine-grained state observations and reward signals that are not directly obtainable from real-world sensors. In this paper, we propose a fully Data-Driven and simulator-free framework for realistic Traffic Signal Control (D2TSC). Specifically, we combine well-established traffic flow theory with machine learning to construct a reward inference model that infers reward signals from coarse-grained traffic data. With the inferred rewards, we further propose a sample-efficient offline RL method to enable direct signal control policy learning from historical offline datasets of real-world intersections. To evaluate our approach, we collect historical traffic data from a real-world intersection and develop a highly customized simulation environment that strictly follows real data characteristics. We demonstrate through extensive experiments that our approach achieves superior performance over conventional and offline RL baselines, and also enjoys much better real-world applicability.
A Fully Data-Driven Approach for Realistic Traffic Signal Control Using Offline Reinforcement Learning
[ { "figure_caption": "Figure1: The proposed D2TSC framework: a fully data-driven approach for realistic traffic signal control using offline RL.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of queuing process on a lane. (a) Arriving vehicles queue in front of the stopline, forming two shockwaves that propagate backward with speeds w1 and w2. Given an initial queue length q0, the maximum queue length qmax is formed where the two shockwaves meet. However, in real TSC data, we can only observe the number of vehicles within the detection range, causing the queue possibly be partially observed. (b) Cumulative arrival (At) and departure (Dt) curves of vehicles within a signal cycle. Assuming arrival and saturated departure flow rates vn and vs remain constant within a signal cycle, based on flow conservation and the impact of shockwaves, the detected cumulative arrival curve Ãd t lower-bounds the actual cumulative arrival curve. This can be used to construct a theoretical spatial vehicle count curve ξt(θ) based on a set of traffic flow parameters (i.e., (vn, vs, ξ0)) and the signal timing information (Tc, Tg, Tr). (c) A Gaussian process interpolation model can be constructed to learn θ by fitting ξt(θ) to the empirically observed spatial vehicle counts x n t and traffic flows x f t .", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: General queuing process in multiple signal cycles.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Fundamental diagram of traffic flow.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Reward estimation results. (a) 5-minute total delay of the intersection. Colorbars represent the data frequency. (b) A case study of cycle-level lane maximum queue length estimation during peak hours.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Evaluated delays and queue lengths (19 Jun.) on the test intersection", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 6 ,6Figure 6, drawing the comparisons with the Conventional TSC method and the SOTA offline RL baseline IQL. Figure 6 clearly highlights that D2TSC can more effectively alleviate traffic congestion throughout the entire day as compared to the baselines, enjoying less delay time and queue length, demonstrating the superior performance of D2TSC.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Optimization results of the test intersection. 
We report the mean and standard deviation across 5 random seeds.", "figure_data": "Total intersection delay (the smaller the better)Method16 Jun.17 Jun.18 Jun.19 Jun.20 Jun.21 Jun.22 Jun.MeanFixed plan59536266612061946396638364806256Conventional TSC45834646462845914618470145154612BC4841 ± 274738 ± 284777 ± 194629 ± 374616 ± 424131 ± 484876 ± 394659 ± 47TD3+BC4522 ± 124714 ± 384588 ± 224579 ± 404634 ± 374633 ± 514551 ± 274603 ± 5IQL4543 ± 364675 ± 424552 ± 244550 ± 464622 ± 294616 ± 464509 ± 524581 ± 9DataLight4542 ± 324679 ± 394558 ± 484549 ± 454628 ± 814614 ± 484507 ± 384582 ± 57DemoLight (offline)8851 ± 8110327 ± 846118 ± 8811073 ± 956138 ± 937480 ± 928872 ± 918408 ± 101D2TSC w/o PPL4533 ± 844660 ± 994580 ± 1314540 ± 1254585 ± 1184611 ± 994522 ± 1214659 ± 45D2TSC w/o AG4518 ± 304659 ± 454565 ± 494524 ± 334617 ± 574604 ± 444490 ± 434568 ± 12D2TSC (Ours)4513 ± 434605 ± 454505 ± 454479 ± 804535 ± 684580 ± 554463 ± 524526 ± 37Total queue Length (the smaller the better)Method16 Jun.17 Jun.18 Jun.19 Jun.20 Jun.21 Jun.22 Jun.MeanFixed plan52956339599460787090669667696323Conventional TSC35583931372436273844368433843679BC3528 ± 583805 ± 633553 ± 763931 ± 543808 ± 583552 ± 573427 ± 323658 ± 89TD3+BC3392 ± 673922 ± 763636 ± 623603 ± 383817 ± 623591 ± 623430 ± 353627 ± 28IQL3458 ± 723881 ± 473594 ± 493533 ± 413767 ± 613608 ± 613428 ± 703610 ± 24DataLight3459 ± 693878 ± 583601 ± 763558 ± 773784 ± 833601 ± 493421 ± 753614 ± 45DemoLight (offline)6174 ± 915589 ± 1218297 ± 5984400 ± 819823 ± 8836532 ± 1936754 ± 3916795 ± 231D2TSC w/o PPL3311 ± 1143851 ± 963564 ± 1423643 ± 1303743 ± 1563522 ± 1043410 ± 1593552 ± 122D2TSC w/o AG3327 ± 563843 ± 743550 ± 433493 ± 523772 ± 503548 ± 183386 ± 803560 ± 34D2TSC (Ours)3309 ± 763789 ± 583467 ± 333427 ± 753685 ± 613513 ± 573372 ± 743509 ± 45", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" } ]
Jianxiong Li; Shichao Lin; Tianyu Shi; Chujie Tian; Yu Mei; Jian Song; Xianyuan Zhan; Ruimin Li
[ { "authors": "B Abdulhai; R Pringle; G J Karakoulas", "journal": "Journal of Transportation Engineering", "ref_id": "b0", "title": "Reinforcement learning for true adaptive traffic signal control", "year": "2003" }, { "authors": "G An; S Moon; J.-H Kim; H O Song", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Uncertainty-based offline reinforcement learning with diversified q-ensemble", "year": "2021" }, { "authors": "C Bai; L Wang; Z Yang; Z.-H Deng; A Garg; P Liu; Z Wang", "journal": "", "ref_id": "b2", "title": "Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning", "year": "2021" }, { "authors": "J O Berger", "journal": "Springer Science & Business Media", "ref_id": "b3", "title": "Statistical decision theory and Bayesian analysis", "year": "2013" }, { "authors": "C M Bishop; N M Nasrabadi", "journal": "Springer", "ref_id": "b4", "title": "Pattern recognition and machine learning", "year": "2006" }, { "authors": "C Cai; C K Wong; B G Heydecker", "journal": "Transportation Research Part C: Emerging Technologies", "ref_id": "b5", "title": "Adaptive traffic signal control using approximate dynamic programming", "year": "2009" }, { "authors": "C Chen; H Wei; N Xu; G Zheng; M Yang; Y Xiong; K Xu; Z Li", "journal": "", "ref_id": "b6", "title": "Toward a thousand lights: Decentralized deep reinforcement learning for large-scale traffic signal control", "year": "2020" }, { "authors": "T Chu; J Wang; L Codecà; Z Li", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b7", "title": "Multiagent deep reinforcement learning for large-scale traffic signal control", "year": "2019" }, { "authors": "S.-B Cools; C Gershenson; B Hooghe", "journal": "", "ref_id": "b8", "title": "Selforganizing traffic lights: A realistic simulation", "year": "2013" }, { "authors": "C F Daganzo", "journal": "Emerald Group Publishing Limited", "ref_id": "b9", "title": "Fundamentals of transportation and traffic operations", "year": "1997" }, { "authors": "F.-X Devailly; D Larocque; L Charlin", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b10", "title": "IG-RL: Inductive graph reinforcement learning for massivescale traffic signal control", "year": "2021" }, { "authors": "J Fu; A Kumar; O Nachum; G Tucker; S Levine", "journal": "", "ref_id": "b11", "title": "D4rl: Datasets for deep data-driven reinforcement learning", "year": "2020" }, { "authors": "S Fujimoto; S S Gu", "journal": "Advances in neural information processing systems", "ref_id": "b12", "title": "A minimalist approach to offline reinforcement learning", "year": "2021" }, { "authors": "S Fujimoto; H Hoof; D Meger", "journal": "", "ref_id": "b13", "title": "Addressing function approximation error in actor-critic methods", "year": "2018" }, { "authors": " Pmlr", "journal": "", "ref_id": "b14", "title": "", "year": "" }, { "authors": "S Fujimoto; D Meger; D Precup", "journal": "", "ref_id": "b15", "title": "Off-policy deep reinforcement learning without exploration", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b16", "title": "", "year": "" }, { "authors": "D Garg; J Hejna; M Geist; S Ermon", "journal": "", "ref_id": "b17", "title": "Extreme Q-Learning: MaxEnt RL without Entropy", "year": "2023" }, { "authors": "N H Gartner; S F Assman; F Lasaga; D L Hou", "journal": "Transportation Research Part B: Methodological", "ref_id": "b18", "title": "A multi-band approach to arterial traffic signal optimization", 
"year": "1991" }, { "authors": "T Haarnoja; H Tang; P Abbeel; S Levine", "journal": "", "ref_id": "b19", "title": "Reinforcement learning with deep energy-based policies", "year": "2017" }, { "authors": " Pmlr", "journal": "", "ref_id": "b20", "title": "", "year": "" }, { "authors": "T Haarnoja; A Zhou; P Abbeel; S Levine", "journal": "", "ref_id": "b21", "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "year": "2018" }, { "authors": " Pmlr", "journal": "", "ref_id": "b22", "title": "", "year": "" }, { "authors": "P Hansen-Estruch; I Kostrikov; M Janner; J G Kuba; S Levine", "journal": "", "ref_id": "b23", "title": "Idql: Implicit q-learning as an actor-critic method with diffusion policies", "year": "2023" }, { "authors": "P Hunt; D Robertson; R Bretherton; M C Royle", "journal": "Traffic Engineering & Control", "ref_id": "b24", "title": "The SCOOT on-line traffic signal optimisation technique", "year": "1982" }, { "authors": "S Iqbal; F Sha", "journal": "", "ref_id": "b25", "title": "Actor-attention-critic for multiagent reinforcement learning", "year": "2019" }, { "authors": "W.-L Jin", "journal": "Transportation Research Part B: Methodological", "ref_id": "b26", "title": "Point queue models: A unified approach", "year": "2015" }, { "authors": "B R Kiran; I Sobh; V Talpaert; P Mannion; A A Al Sallab; S Yogamani; P Pérez", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b27", "title": "Deep reinforcement learning for autonomous driving: A survey", "year": "2021" }, { "authors": "I Kostrikov; A Nair; S Levine", "journal": "", "ref_id": "b28", "title": "Offline Reinforcement Learning with Implicit Q-Learning", "year": "2021" }, { "authors": "A Kumar; A Zhou; G Tucker; S Levine", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "Conservative q-learning for offline reinforcement learning", "year": "2020" }, { "authors": "M Kunjir; S Chawla", "journal": "", "ref_id": "b30", "title": "Offline Reinforcement Learning for Road Traffic Control", "year": "2022" }, { "authors": "S Levine; A Kumar; G Tucker; J Fu", "journal": "", "ref_id": "b31", "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems", "year": "2020" }, { "authors": "J Li; X Hu; H Xu; J Liu; X Zhan; Q.-S Jia; Y.-Q Zhang", "journal": "", "ref_id": "b32", "title": "Mind the Gap: Offline Policy Optimization for Imperfect Rewards", "year": "2023" }, { "authors": "J Li; X Zhan; H Xu; X Zhu; J Liu; Y.-Q Zhang", "journal": "", "ref_id": "b33", "title": "When Data Geometry Meets Deep Function: Generalizing Offline Reinforcement Learning", "year": "2023" }, { "authors": "L Li; Y Lv; F.-Y Wang", "journal": "IEEE/CAA Journal of Automatica Sinica", "ref_id": "b34", "title": "Traffic signal timing via deep reinforcement learning", "year": "2016" }, { "authors": "X Liang; X Du; G Wang; Z Han", "journal": "IEEE Transactions on Vehicular Technology", "ref_id": "b35", "title": "A deep reinforcement learning network for traffic light cycle control", "year": "2019" }, { "authors": "M J Lighthill; G B Whitham", "journal": "Proceedings of the royal society of london. series a. mathematical and physical sciences", "ref_id": "b36", "title": "On kinematic waves II. 
A theory of traffic flow on long crowded roads", "year": "1178" }, { "authors": "T P Lillicrap; J J Hunt; A Pritzel; N Heess; T Erez; Y Tassa; D Silver; D Wierstra", "journal": "", "ref_id": "b37", "title": "Continuous control with deep reinforcement learning", "year": "2016" }, { "authors": "P A Lopez; M Behrisch; L Bieker-Walz; J Erdmann; Y.-P Flötteröd; R Hilbrich; L Lücken; J Rummel; P Wagner; E Wiebner", "journal": "", "ref_id": "b38", "title": "Microscopic traffic simulation using sumo", "year": "2018" }, { "authors": "P Lowrie; K Lu; X Tian; S Jiang; Y Lin; W Zhang", "journal": "", "ref_id": "b39", "title": "Scats, sydney co-ordinated adaptive traffic system: A traffic responsive method of controlling urban traffic", "year": "1990" }, { "authors": "V Mnih; A P Badia; M Mirza; A Graves; T Lillicrap; T Harley; D Silver; K Kavukcuoglu", "journal": "", "ref_id": "b40", "title": "Asynchronous methods for deep reinforcement learning", "year": "1928" }, { "authors": " Pmlr", "journal": "", "ref_id": "b41", "title": "", "year": "" }, { "authors": "Z Mo; W Li; Y Fu; K Ruan; X Di", "journal": "Transportation research part C: emerging technologies", "ref_id": "b42", "title": "CV-Light: Decentralized learning for adaptive traffic signal control with connected vehicles", "year": "2022" }, { "authors": "A Nair; A Gupta; M Dalal; S Levine", "journal": "", "ref_id": "b43", "title": "Awac: Accelerating online reinforcement learning with offline datasets", "year": "2020" }, { "authors": "G F Newell", "journal": "Transportation Research Part B: Methodological", "ref_id": "b44", "title": "A simplified theory of kinematic waves in highway traffic, part II: Queueing at freeway bottlenecks", "year": "1993" }, { "authors": "H Niu; S Sharma; Y Qiu; M Li; G Zhou; H Jianming; X Zhan", "journal": "", "ref_id": "b45", "title": "When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning", "year": "2022" }, { "authors": "A Oroojlooy; M Nazari; D Hajinezhad; J Silva", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b46", "title": "Attendlight: Universal attention-based reinforcement learning model for traffic signal control", "year": "2020" }, { "authors": "D A Pomerleau", "journal": "Advances in neural information processing systems", "ref_id": "b47", "title": "Alvinn: An autonomous land vehicle in a neural network", "year": "1988" }, { "authors": "P I Richards", "journal": "Operations research", "ref_id": "b48", "title": "Shock waves on the highway", "year": "1956" }, { "authors": "F Rodrigues; C L Azevedo", "journal": "IEEE", "ref_id": "b49", "title": "Towards Robust Deep Reinforcement Learning for Traffic Signal Control: Demand Surges, Incidents and Sensor Failures", "year": "2019" }, { "authors": "S M A Shabestary; B Abdulhai", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b50", "title": "Adaptive Traffic Signal Control With Deep Reinforcement Learning and High Dimensional Sensory Inputs: Case Study and Comprehensive Sensitivity Analyses", "year": "2022" }, { "authors": "A G Sims; K W Dobinson", "journal": "IEEE Transactions on vehicular technology", "ref_id": "b51", "title": "The Sydney coordinated adaptive traffic (SCAT) system philosophy and benefits", "year": "1980" }, { "authors": "S Sinha; A Mandlekar; A Garg", "journal": "PMLR", "ref_id": "b52", "title": "S4rl: Surprisingly simple self-supervision for offline reinforcement learning in robotics", "year": "2022" }, { "authors": "E Van Der Pol; F A Oliehoek", 
"journal": "", "ref_id": "b53", "title": "Coordinated deep reinforcement learners for traffic light control", "year": "2016" }, { "authors": "F Webster", "journal": "", "ref_id": "b54", "title": "Traffic Signal Settings", "year": "1958" }, { "authors": "H Wei; N Xu; H Zhang; G Zheng; X Zang; C Chen; W Zhang; Y Zhu; K Xu; Z Li", "journal": "", "ref_id": "b55", "title": "Colight: Learning network-level cooperation for traffic signal control", "year": "1913" }, { "authors": "H Wei; G Zheng; V Gayah; Z Li", "journal": "ACM SIGKDD Explorations Newsletter", "ref_id": "b56", "title": "Recent advances in reinforcement learning for traffic signal control: A survey of models and evaluation", "year": "2021" }, { "authors": "H Wei; G Zheng; H Yao; Z Li", "journal": "", "ref_id": "b57", "title": "Intellilight: A reinforcement learning approach for intelligent traffic light control", "year": "2018" }, { "authors": "M Wiering; J V Veenen; J Vreeken; A Koopman", "journal": "", "ref_id": "b58", "title": "Intelligent traffic light control", "year": "2004" }, { "authors": "Y Xiong; G Zheng; K Xu; Z Li", "journal": "", "ref_id": "b59", "title": "Learning traffic signal control from demonstrations", "year": "2019" }, { "authors": "H Xu; L Jiang; L Jianxiong; X Zhan", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b60", "title": "A policy-guided imitation approach for offline reinforcement learning", "year": "2022" }, { "authors": "H Xu; L Jiang; J Li; Z Yang; Z Wang; V W K Chan; X Zhan", "journal": "", "ref_id": "b61", "title": "Offline RL with No OOD Actions: In-Sample Learning via Implicit Value Regularization", "year": "2023" }, { "authors": "H Xu; X Zhan; J Li; H Yin", "journal": "", "ref_id": "b62", "title": "Offline reinforcement learning with soft behavior regularization", "year": "2021" }, { "authors": "H Yan; F He; X Lin; J Yu; M Li; Y Wang", "journal": "Transportation Research Part C: Emerging Technologies", "ref_id": "b63", "title": "Network-level multiband signal coordination scheme based on vehicle trajectory data", "year": "2019" }, { "authors": "J Yoon; K Ahn; J Park; H Yeo", "journal": "Transportation Research Part C: Emerging Technologies", "ref_id": "b64", "title": "Transferable traffic signal control: Reinforcement learning with graph centric state representation", "year": "2021" }, { "authors": "X Zang; H Yao; G Zheng; N Xu; K Xu; Z Li", "journal": "", "ref_id": "b65", "title": "MetaLight: Value-Based Meta-Reinforcement Learning for Traffic Signal Control", "year": "2020" }, { "authors": "X Zhan; R Li; S V Ukkusuri", "journal": "Transportation Research Part C: Emerging Technologies", "ref_id": "b66", "title": "Link-based traffic state estimation and prediction for arterial networks using recognition data", "year": "2020" }, { "authors": "X Zhan; H Xu; Y Zhang; X Zhu; H Yin; Y Zheng", "journal": "", "ref_id": "b67", "title": "DeepThermal: Combustion Optimization for Thermal Power Generating Units Using Offline Reinforcement Learning", "year": "2022" }, { "authors": "H Zhang; S Feng; C Liu; Y Ding; Y Zhu; Z Zhou; W Zhang; Y Yu; H Jin; Z Li", "journal": "", "ref_id": "b68", "title": "Cityflow: A multi-agent reinforcement learning environment for large scale city traffic scenario", "year": "2019" }, { "authors": "L Zhang; J Deng", "journal": "", "ref_id": "b69", "title": "Data Might be Enough: Bridge Real-World Traffic Signal Control Using Offline Reinforcement Learning", "year": "2023" }, { "authors": "G Zheng; Y Xiong; X Zang; J Feng; H Wei; H Zhang; Y Li; K Xu; Z 
Li", "journal": "", "ref_id": "b70", "title": "Learning phase competition for traffic signal control", "year": "2019" }, { "authors": "G Zheng; X Zang; N Xu; H Wei; Z Yu; V Gayah; K Xu; Z Li", "journal": "", "ref_id": "b71", "title": "Diagnosing reinforcement learning for traffic signal control", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 62.29, 383.94, 230.21, 30.2 ], "formula_id": "formula_0", "formula_text": "max π E ∞ t=0 γ t r(s t , a t )|s 0 ∼ ρ, a t ∼ π, s t+1 ∼ P . (1)" }, { "formula_coordinates": [ 3, 54, 438.7, 238.5, 36.03 ], "formula_id": "formula_1", "formula_text": "V π (s) := E [ ∞ t=0 γ t r(s t , a t )|s 0 = s, a t ∼ π, s t+1 ∼ P] or an action-value function Q π (s, a) := E[ ∞ t=0 γ t r(s t , a t )| s 0 = s, a 0 = 0, a t+1 ∼ π, s t+1 ∼ P]." }, { "formula_coordinates": [ 3, 99.36, 532.1, 193.14, 33.4 ], "formula_id": "formula_2", "formula_text": "Q π = arg min Q E (s,a,r,s ′ )∼D [(r(s, a) +γE a ′ ∼π(•|s ′ ) Q(s ′ , a ′ ) -Q(s, a)) 2 ],(2)" }, { "formula_coordinates": [ 3, 111.6, 640.51, 123.31, 16.21 ], "formula_id": "formula_3", "formula_text": "max π E s∼D,a∼π(•|s) [Q π (s, a)] ." }, { "formula_coordinates": [ 3, 334.54, 373.05, 223.46, 30.2 ], "formula_id": "formula_4", "formula_text": "max π E ∞ t=0 γ t r(s t , a t ) -αf π(a t |s t ) µ(a t |s t ) ,(4)" }, { "formula_coordinates": [ 3, 328.28, 546.55, 229.72, 67.18 ], "formula_id": "formula_5", "formula_text": "min Q E (s,a,s ′ )∼D r(s, a) + γE a ′ ∼π(•|s ′ ) Q(s ′ , a ′ ) -αf π(a ′ |s ′ ) µ(a ′ |s ′ ) -Q(s, a) 2 ,(5)" }, { "formula_coordinates": [ 3, 349.8, 631.38, 208.2, 22.31 ], "formula_id": "formula_6", "formula_text": "max π E s∼D,a∼π Q(s, a) -αf π(a|s) µ(a|s) .(6)" }, { "formula_coordinates": [ 4, 323.65, 157.55, 234.35, 44.49 ], "formula_id": "formula_7", "formula_text": "x f t = x f t,1 , • • • , x f t,L ∈ R 1×L and spatial vehicle counts x n t = x n t,1 , • • • , x n t,L ∈ R 1×L" }, { "formula_coordinates": [ 4, 390.56, 276.37, 167.44, 13.37 ], "formula_id": "formula_8", "formula_text": "s t = [x f t , x n t ] ∈ R 1×2L . (7)" }, { "formula_coordinates": [ 4, 330.02, 315.84, 227.98, 29.33 ], "formula_id": "formula_9", "formula_text": "T g = [T 1 g , T 2 g , ..., T |P|-1 g , 1 - |P|-1 p=1 T p g ] ∈ R 1×|P|" }, { "formula_coordinates": [ 4, 382.27, 410.25, 84.72, 11.72 ], "formula_id": "formula_10", "formula_text": "a t = [T c , T g ] ∈ R 1×(" }, { "formula_coordinates": [ 5, 61.82, 428.11, 230.68, 81.4 ], "formula_id": "formula_11", "formula_text": "A t = ξ 0 + v n t 0 ≤ t ≤ T c (9) D t =        0 0 ≤ t ≤ T r v s (t -T r ) T r < t ≤ T r + τ v s τ + v n (t -T r -τ ) T r + τ < t ≤ T c x f t = T c(" }, { "formula_coordinates": [ 5, 158.02, 640.18, 134.48, 13.14 ], "formula_id": "formula_12", "formula_text": "Ãd t ≤ A t (11)" }, { "formula_coordinates": [ 5, 397.56, 327.67, 156.3, 13.14 ], "formula_id": "formula_13", "formula_text": "Ãd t ≤ D t-t S + L dr k j (12" }, { "formula_coordinates": [ 5, 553.85, 330.51, 4.15, 8.64 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 5, 376.57, 530.95, 181.43, 13.14 ], "formula_id": "formula_15", "formula_text": "Ãd t ≈ min{A t , D t-t S + L dr k j }(13)" }, { "formula_coordinates": [ 5, 319.5, 575.71, 238.5, 128.44 ], "formula_id": "formula_16", "formula_text": "ξ t := Ãd t -D t ≈ min{A t -D t , D t-t S -D t + L dr k j } = min{ξ (1) t , ξ (2) t + L dr k j } (14) where ξ (1) t = ξ 0 +      v n t, 0 ≤ t ≤ T r -v s T r + (v n -v s )t, T r < t ≤ T r + τ v n T c -x f , T r + τ < t ≤ T c (15)" }, { "formula_coordinates": [ 6, 54, 293.54, 238.5, 115.27 ], "formula_id": "formula_17", "formula_text": "(2) t =                              0, 0 ≤ t ≤ Tr -vs(t -Tr), Tr < t ≤ Tr + t S , t S < τ -vst S , Tr + t S < t ≤ Tr + τ, t S < τ -vs(t -Tr), Tr < t ≤ Tr + τ, t S ≥ τ vnTc -vs(t S + Tr) -x f + (vs -vn)t, Tr + τ < t ≤ Tr + 
τ + t S , t S < Tg -τ -vnt S , Tr + τ + t S < t ≤ Tc, t S < Tg -τ -x f + vnTc -vnt, Tr + τ < t ≤ Tr + t S , t S ≥ Tg -τ -vs(t S + Tr) + vnTc -x f + (vs -vn)t Tr + t S < t ≤ Tc, t S ≥ Tg -τ (16) and τ = x f -v n T g v s -v n ∈ [0, T g ](17)" }, { "formula_coordinates": [ 6, 319.5, 294.6, 76.83, 12.19 ], "formula_id": "formula_18", "formula_text": "v c n , x f,c , t c r , T c r , T c" }, { "formula_coordinates": [ 6, 336.18, 332.76, 221.82, 43.53 ], "formula_id": "formula_19", "formula_text": "v c n max t c r ≤t<t c+1 r (x n t ) = v c ′ n max t c ′ r ≤t<t c ′ +1 r (x n t ) = ζ, ∀c, c ′ ∈ {1, • • • , C}(18)" }, { "formula_coordinates": [ 6, 333.87, 421.58, 224.13, 51.37 ], "formula_id": "formula_20", "formula_text": "x f = C i=1 x f,c + x nr (19) x f,c = x n t c r + v c n T c c -x n t c+1 r , ∀c ∈ {1, • • • , C}(20)" }, { "formula_coordinates": [ 6, 397.03, 693.12, 160.97, 12.69 ], "formula_id": "formula_21", "formula_text": "x n t (θ) = ξ t (θ) + ηε,(21)" }, { "formula_coordinates": [ 7, 105.79, 262.48, 186.71, 9.68 ], "formula_id": "formula_22", "formula_text": "GP (t|θ) ∼ N (ξ t (θ), Ker(t, t|η))(22)" }, { "formula_coordinates": [ 7, 69.44, 307.63, 223.06, 40.56 ], "formula_id": "formula_23", "formula_text": "Ker(t, t|η) =    ϕ(t 1 , t 1 |η) . . . ϕ(t 1 , t n |η) . . . . . . . . . ϕ(t n , t 1 |η) . . . ϕ(t n , t n |η)   (23)" }, { "formula_coordinates": [ 7, 90.52, 416.89, 201.98, 22.78 ], "formula_id": "formula_24", "formula_text": "ϕ(t i , t j |η) = h 0 e - t i -t j λ 2 + η 2 δ(t i , t j )(24)" }, { "formula_coordinates": [ 7, 391.3, 489.17, 166.7, 23.23 ], "formula_id": "formula_25", "formula_text": "q max = w 2 (q 0 + w 1 T r ) w 2 -w 1 (25)" }, { "formula_coordinates": [ 7, 370.13, 635.91, 187.87, 9.65 ], "formula_id": "formula_26", "formula_text": "v = G(k) = min{uk, w(k j -k)}(26)" }, { "formula_coordinates": [ 8, 76.64, 75.2, 215.86, 50.05 ], "formula_id": "formula_27", "formula_text": "w 1 = k AC = v n k n -k j = 1 1/u -k j /v n = - 1 k j (1/v n -1/v s ) + t S /L dr (27)" }, { "formula_coordinates": [ 8, 137.47, 176.05, 155.03, 9.65 ], "formula_id": "formula_28", "formula_text": "w 2 = k BC = -w(28)" }, { "formula_coordinates": [ 8, 104.95, 292.03, 187.55, 23.22 ], "formula_id": "formula_29", "formula_text": "d = 1 2 T r q max + q 0 (w 2 T r + q 0 ) w 2 -w 1 (29)" }, { "formula_coordinates": [ 8, 54, 448.98, 238.5, 50.06 ], "formula_id": "formula_30", "formula_text": "r t = L l=1 Ct c=1 d l,c x f,c t,l L l=1 Ct c=1 x f,c t,l (30) Both x f,c" }, { "formula_coordinates": [ 8, 333.79, 436.45, 220.06, 13.36 ], "formula_id": "formula_31", "formula_text": "PPL(x f t , Φ ID , Φ M ) = x f t (Φ ID Φ M ) T ∈ R 1×|P| , (31" }, { "formula_coordinates": [ 8, 320.26, 551.95, 237.74, 25.97 ], "formula_id": "formula_32", "formula_text": "ŝt = x f t , x n t , Φ ID , PPL(x f t , Φ ID , Φ M ) ∈ R 1×(2L+K+|P|)(32)" }, { "formula_coordinates": [ 9, 68.57, 107.58, 219.78, 17.33 ], "formula_id": "formula_33", "formula_text": "min V E (s,a)∼D L f V (Q(s, a) -V (s)) , (33" }, { "formula_coordinates": [ 9, 68.57, 110.65, 223.93, 35.69 ], "formula_id": "formula_34", "formula_text": ") min Q E (s,a,s ′ )∼D [r(s, a) + γV (s ′ ) -Q(s, a)] 2 , (34" }, { "formula_coordinates": [ 9, 288.35, 132.07, 4.15, 8.64 ], "formula_id": "formula_35", "formula_text": ")" }, { "formula_coordinates": [ 9, 68.57, 151.4, 219.78, 16.21 ], "formula_id": "formula_36", "formula_text": "min π E (s,a)∼D L f π (Q(s, a) -V (s)) log π(a|s) , (35" }, { "formula_coordinates": [ 9, 288.35, 153.79, 4.15, 8.64 ], "formula_id": 
"formula_37", "formula_text": ")" }, { "formula_coordinates": [ 9, 68.4, 374.65, 224.1, 29.6 ], "formula_id": "formula_38", "formula_text": "L f V (x) = I(1 + x/2α > 0)(1 + x/2α) 2 -x/α, (36) L f π (x) = I(1 + x/2α > 0)(1 + x/2α),(37)" }, { "formula_coordinates": [ 9, 321.16, 336.71, 232.61, 40.19 ], "formula_id": "formula_39", "formula_text": "min V E (s,a)∼D,ϵ∼N (0,σ 2 ) L f V (Q(s, a) -V (s)) , min Q E (s,a,s ′ )∼D,ϵ∼N (0,σ 2 ) (r(s, a) + γV (s ′ ) -Q(s, a)) 2" } ]
10.1145/nnnnnnn.nnnnnnn
2023-11-27
[ { "figure_ref": [], "heading": " ", "publication_ref": [ "b11", "b12", "b14", "b16", "b17", "b23", "b25", "b31", "b33", "b44", "b45", "b53", "b58", "b42", "b43", "b22", "b41", "b9", "b24", "b40", "b52", "b56", "b23", "b5", "b19", "b52", "b2", "b46", "b28", "b13", "b19", "b20", "b48", "b0", "b27", "b7", "b38", "b0", "b27", "b7", "b23", "b49", "b26", "b33", "b3", "b5", "b4", "b60", "b57", "b42", "b6", "b47", "b42", "b60", "b42", "b6", "b47", "b33", "b34" ], "table_ref": [], "text": "12,\n13,\n15,\n17,\n18,\n24,\n26,\n32,\n34,\n45,\n46,\n54,\n59\n]. However, most NeurIR methods are re-ranking methods, which must rely on a first-stage conventional retriever such as BM25 [43] to obtain a manageable set of relevant document candidates. They re-rank this set because for NeurIR methods to rank the entire corpus is computationally prohibitive.\nTo resolve the efficiency issue, representation-based NeurIR approaches have been explored for they can pre-process document collections offline and offload much of the computation burden. Representation-based methods encode a query or a document into a single fixed-size vector. Each query and each document has its own embedding vector, trained in a way that the embeddings can be used in dot product to measure query-document similarity. This setup allows pre-computation of document representations of the entire collection at offline time. Compared to a conventional retrieval system that consists of an indexing phase and a retrieval phase [44], representation-based approaches' learning and storing the document embeddings can be thought of the \"indexing\" phase; and calculating and sorting the dot products is the \"retrieval\" phase. It can be shown in the following pipeline (Here 𝐸 (𝑞) and 𝐸 (𝑑 𝑖 ) are embeddings for query 𝑞 and document 𝑑 𝑖 , respectively):\nLearn 𝐸 (𝑑 𝑖 ) → Store 𝐸 (𝑑 𝑖 ) in index, ∀𝑖 ↘ ∀𝑖, 𝐸 (𝑞) • 𝐸 (𝑑 𝑖 ) → Top-k results\nLearn 𝐸 (𝑞) → 𝐸 (𝑞) ↗ by MIPS, ANN.\n(\nThe embedding vectors can be either dense or sparse. Dense vectors are usually short, with nearly all non-zero entries. Sparse vectors, on the contrary, can be long and with many zero-valued entries. Dense, representation-based retrievers (e.g., DPR [23], SBERT [42], Condenser [10], ICT [25], RocketQA [41], ANCE [53], RepBERT [57], and ColBERT [24]) have gained much attention recently. They study pre-training or fine-tuning (mostly fine-tuned from the BERT embeddings [6]) methods to obtain dense low-dimensional encodings for queries and documents. Unlike results obtained from the sparse bag-of-words (BoWs) representations, top-K results obtained from these dense representations cannot be efficiently found without any approximation. Instead, they must be assisted with efficient search algorithms for approximate nearest neighbor search (ANN) [20,53] or maximum inner-product search (MIPS) [3]. Popular efficient search algorithms leverage ideas such as hashing (e.g. LSH [47]), clustering [29], product quantization (e.g. ScaNN [14], FAISS [20]) and dimension reduction (e.g. PCA [21], t-SNE [49]), equipped with carefully chosen data structures.\nSparse, representation-based retrievers have also been explored to further improve indexing and retrieval efficiency by forcing more zero entries in the embeddings. Existing methods to enforcing sparsity include using gating to select entries (e.g., SparTerm [1]), assigning zeros to non-query terms (e.g., EPIC [28]), and regularizing the vectors by minimizing flop operations (e.g., SPLADE [8] and FLOPS [39]), etc. 
Although these methods make the embedding vectors sparser, they do not change the basics of the dual-encoder setup (illustrated in Eq. 1). Most of them still use the over-simplified dot product as their retrieval function (e.g., SparTerm [1], EPIC [28], and SPLADE [8]). It is unclear how the benefit gained from sparse representations can be leveraged to employ more efficient data structures such as the inverted index.\nWe argue that representation-based methods, including both dense and sparse ones, have a few drawbacks for document retrieval. First, unlike traditional term-level inverted index, the index formed by a representation-based approach's embeddings cannot be easily re-used by another retrieval method. Representation-based methods focus so much on finding the representation that best represents a piece of text, aka learning the metric space as in metric learning. Every index that is learned by a representation-based method is unique to itself, which makes it hard to be re-usable by others. When one works on a representation-based method, each time she must process the document collection and rebuild the entire index. It contradicts a common expectation for an index -that it should be general and re-usable by down-stream tasks such as various retrieval methods.\nSecond, in a representation-based method, the actual retrieval function responsible for similarity matching between query and document is kept to the bare minimum. Most of them use a dot product. ColBERT [24] used a slightly more sophisticated maximum cosine similarity function, but it is still over-simplified for expressing the interaction between query and document. In fact, researchers have discovered that separating query and document in the indexing phrase and keeping their interactions at minimum hurts retrieval effectiveness. Wang et al. pointed out that it is necessary for dense, representation-based retrievers to interpolate with sparse, interaction-based retrievers such as BM25 [50] to gain better performance. Luan et al. proved that embedding size imposes an upper bound on effectiveness that a dense retriever can achieve and on text length that it can handle [27]. It explains why few successes for these approaches have been seen on first-stage, long document retrieval.\nOn the other hand, interaction-based retrievers have been known for their superior retrieval effectiveness. This can be witnessed by the long-time record set by BM25, a sparse, interaction-based method in the pre-neural era, and by the top performance set by MonoBERT [34], an all-in-all, dense interaction-based method marked by its extensive interactions among query terms and document terms. Many early NeurIR methods are also sparse, interactionbased methods [7, 13, 36-38, 43, 48, 52]. When receiving a query from the user, they generate a query-document interaction matrix in real-time and feed it into neural networks to predict a relevance score. Early interaction-based neural models do not have an index and must construct a large interaction matrix at query time, which is the main obstacle that prohibits them from succeeding in applying to first-stage, full-length documents.\nA few pieces of work have been proposed to support indexing for interaction-based neural retrievers. Most of them take advantage of an inverted index for its fast lookup functions. 
They share a pipeline illustrated as the following:\nProcess 𝑑 𝑖 → Store⟨𝑣, 𝑑 𝑖 ⟩ in index, ∀𝑖 ↘ Lookup 𝑞 in 𝑣 → Top-k results Process 𝑞 → 𝑞 ↗ by 𝑠 (𝑞, 𝑑 𝑖 ),(2)\nwhere 𝑣 is the vocabulary obtained from the document collection, ⟨𝑣, 𝑑 𝑖 ⟩ is the interaction between 𝑣 and document 𝑑 𝑖 , and 𝑠 (𝑞, 𝑑 𝑖 ) is a retrieval function that measures the similarity between 𝑞 and 𝑑 𝑖 . For instance, DeepCT [4] substituted term frequency with context-aware term weights aggregated from BERT embeddings [6], and stored them in an inverted index. Their followup work, HDCT [5], succeeded in full-length document retrieval. TILDE [61] stored conditional term probabilities in an inverted index to support deep query-likelihood calculations in its retrieval function. SPARTA [58] stored dot products between vocabulary terms to document terms in an inverted index.\nWhile it is encouraging to see these methods build indices for sparse, interaction-based retrievers, we find their indices are tailored to the specific neural retrieval function that they use in their retrieval phase. It is suboptimal if these indices cannot be general enough to be re-used by other neural retrievers. For instance, neural retrievers developed prior to BERT, including KRNM [43], HiNT [7], and DeepTileBars [48], are effective on document retrieval but have no index to support them. SNRM can be made to support them, as we demonstrated in our experiments, however, the matchings are done with latent terms, not actual lexical terms and the effectiveness degrades much.\nIn this paper, we propose a novel SEgment-based Neural Indexing method, SEINE, which provides a general indexing framework that can flexibly support a variety of neural retrieval methods. We focus on facilitating interaction-based retrieval methods for their higher effectiveness. During query time, a retriever can only look up pre-computed interaction function values for corresponding query terms from the index, quickly merging them into a querydocument interaction matrix and sending it to neural networks to predict a relevance score. Moreover, we adopt a flexible, segmentbased design in our index to support query-document interaction at different granularities. For instance, the query-document interaction can be done at the document-level (e.g. BM25 [43] and TILDE [61]), term-level (e.g., KRNM [43]), or topical segment-level (e.g., HiNT [7] and DeepTileBars [48]). We believe as long as we can decompose the retrieval methods and identify the interaction units used between the query and the document, we should be able to build an index that is general, modularized, and reusable. Our segment-level inverted index stores common components, which we call atomic interaction values, including term frequency, BERT embedding, conditional probabilities, etc. We also leverage Spark programming to accelerate the entire indexing process. Experiments on LETOR MQ2007 and MQ2008 datasets show that our indexing method can accelerate multiple neural retrieval methods up to 28-times faster without sacrificing much effectiveness.\nNote that the most effective neural retrievers at the moment when the paper is being written are dense, interaction-based methods, such as MonoBERT [34] and monoT5 [35], whose all-in-all interactions within its transformer blocks cannot be easily decomposed. We leave creating indices for this type of retrievers as future work and focus on sparse, interaction-based methods. 
" }, { "figure_ref": [], "heading": "Our work makes the following contributions:", "publication_ref": [], "table_ref": [], "text": "• We propose a general indexing framework for constructing reusable indices to support sparse, interaction-based neural retrievers; • We design our index to enable multiple sparse, interactionbased NeurIR methods that previously have no index and suffer from high latency; Our work rejuvenates them for first-stage, full-length document retrieval; • We also demonstrate how to make use of the Spark programming to accelerate the entire indexing process." }, { "figure_ref": [ "fig_0" ], "heading": "METHOD", "publication_ref": [], "table_ref": [], "text": "Our focus is on decomposing existing interaction-based neural retrievers by two measures: 1) recognizing that they perform querydocument interactions at different granularities and providing a flexible segment-based approach to suit them all; 2) identifying a list of atomic interaction function values that can be pre-computed and stored in an inverted list to support common neural retrievers. Figure 1 illustrates SEINE's framework. It supports end-to-end ad hoc document retrieval and has two phases, an indexing phase and an retrieval phase. The indexing phase is query-independent and can be done offline. During indexing, we (1) process the entire corpus and obtain a vocabulary 𝑣, (2) segment each document in the collection, and then (3) create an inverted index by interacting each vocabulary term 𝑤 𝑖 with each document 𝑑 𝑗 on the segment level. We call this interaction v-d interaction meaning each vocabulary word interacting with each document in the collection. The inverted index is used to store a list of interaction values for each word-segment pair. The number of segments per document is standardized for all documents. (4) We also adopt Spark 1 and make use of its parallel programming to accelerate the indexing process. At the retrieval phase, when a query 𝑞 comes in, each query word is looked up from the v-d interaction matrix stored in the index and the matched terms' rows are extracted and stacked into another interaction matrix called q-d interaction. We feed the q-d interaction matrix into a neural retrieval function to obtain the final relevance scores 𝑠 (𝑞, 𝑑 𝑗 ), ∀𝑗. Since the number of query terms are small, calculations at the retrieval phase is sparse and quick to finish.\n1 https://spark.apache.org/." }, { "figure_ref": [], "heading": "Pre-Processing a Document Collection", "publication_ref": [ "b50", "b55" ], "table_ref": [], "text": "To begin, we process the entire corpus 𝐶 = {𝑑 1 , 𝑑 2 , ..., 𝑑 |𝐶 | } and obtain a vocabulary 𝑣 = {𝑤 1 , 𝑤 2 , ..., 𝑤 |𝑣 | }. The vocabulary is the set of unique terms appearing in the corpus. Standard text pre-processing steps are taken to attain the vocabulary. First, we tokenize all documents in the collection 𝐶 into a sequence of tokens using the Wordpiece tokenizer [51]. Then, we remove the most frequent 10% and the least frequent 10% terms from the vocabulary based on their collection-level frequency. It is to exclude misspellings, rare words, programming scripts, punctuation, special symbols, stopwords, etc. 
While we take this pass on the entire document collection, we keep track of the inverse document frequency of each term (𝑖𝑑 𝑓 (𝑤 𝑖 ), 𝑤 𝑖 ∈ 𝑣) and store them:\n𝑖𝑑 𝑓 (𝑤 𝑖 ) = log |𝐶 | | { 𝑗 |𝑤 𝑖 ∈𝑑 𝑗 } |+1 .\nFor most neural retrievers that we support, we assume that all incoming query terms can be found in the vocabulary, so that at query time the q-d interaction matrix can be built by looking up in the pre-computed v-d interaction matrix, instead of calculating the q-d interactions on the fly. We are aware that this assumption may not be valid and out-of-vocabulary terms can certainly be in a query. In some neural indexers that we compare with (e.g. SNRM [56]), however, a latent semantic representation is used as the indexing unit, which can be seen as a way to alleviate the vocabulary mismatch problem." }, { "figure_ref": [], "heading": "Segmenting Documents to Support Various Interaction Granularities", "publication_ref": [ "b55", "b42", "b42", "b6", "b47", "b15", "b26" ], "table_ref": [], "text": "Existing interaction-based neural retrievers perform querydocument interaction at different granularities. Some are performed at the document-level (e.g. SNRM [56] and BM25 [43]), which interacts a query 𝑞 with an entire document 𝑑; some are at the term-level (e.g., KRNM [43]), meaning interacting a single query term and a single document term; and others are at segment-level that represent topics in a document (e.g., HiNT [7] and DeepTileBars [48]).\nWe propose a flexible segment-based indexing approach to support query-document interaction at different granularities. Our chosen interaction unit is the segment, whose size can be adjusted to include term-and document-level interactions. For a document 𝑑, we split it into non-overlapping segments following the TextTiling algorithm [16]. The segmentation is done based on the similarity between fixed-sized, neighbouring text windows. If any two neighboring windows show high similarity, we merge them into a bigger segment. If they are dissimilar, they are put into different segments. The resulting segments roughly reflect topical boundaries in a document. Unsurprisingly, the number of segments 𝑦 in one document can be quite different from that in another. To standardize 𝑦 for different documents, we (1) pad empty segments if 𝑦 <= 𝑛 𝑏 , a pre-defined, adjustable dimension parameter, or (2) squeeze all remaining text into the final segment if 𝑦 > 𝑛 𝑏 . Eventually, all documents contain an equal number of 𝑛 𝑏 segments: 𝑑 = {𝑆 1 , 𝑆 2 , ..., 𝑆 𝑛 𝑏 }. When 𝑛 𝑏 = 1, it is equivalent to interacting at the document-level; when 𝑛 𝑏 = |𝑣 |, it is equivalent to interacting at the term-level. Note that driven by a completely different motivation, [27] also spoke about the advantage of using multiple, fixed length vectors, instead of a single vector, to represent a document. Here we introduce segment-level indexing for its flexibility to cover query-document interactions at all levels of granularities, when 𝑛 𝑏 is set to different values." }, { "figure_ref": [], "heading": "Storing Atomic Interactions in Inverted Index", "publication_ref": [ "b42", "b47", "b42", "b6", "b47", "b30", "b5", "b35", "b10", "b51", "b6", "b51", "b51", "b47", "b60" ], "table_ref": [], "text": "In this paper, we propose to decompose components in existing neural retrievers that are related to query-document interactions into atomic interaction functions and store their values into the index. 
These atomic interactions include term frequency, operations over BERT embeddings, kernel functions, conditional probabilities, etc.
For each term 𝑤 ∈ 𝑣 and an interaction unit 𝑆, where 𝑆 is a text segment (which can be adjusted to represent a document or a term), we pre-calculate and store the following atomic interaction function values:
• Term frequency: 𝑡 𝑓 (𝑤, 𝑆), the number of occurrences of term 𝑤 in 𝑆. It can be stored to support traditional retrieval methods such as BM25 [43] and neural retrievers such as DeepTileBars [48].
• Indicative inverse document frequency: 𝑖𝑑 𝑓 (𝑤) × I 𝑆 (𝑤), where 𝑖𝑑 𝑓 (𝑤) is the inverse document frequency of term 𝑤 and I 𝑆 (𝑤) indicates whether 𝑤 is in 𝑆. This function can be used to support traditional retrieval methods such as BM25 [43] and neural retrievers such as HiNT [7] and DeepTileBars [48].
• Dot product: Σ 𝑡 ∈𝑆 𝐸 (𝑤) • 𝐸 (𝑡), where 𝐸 (•) is an embedding output from a pre-trained neural encoder, such as word2vec [31] or BERT [6]. The dot product measures the similarity between the two embeddings. In theory, 𝐸 (•) can be any dense or sparse representation of a text sequence. Therefore, this interaction function can be used to store the interaction between the dense representations of a word and a segment. It can thus be used to support MatchPyramid [36] and dense retrievers such as COIL [11].
• Cosine similarity: Σ 𝑡 ∈𝑆 𝐸 (𝑤) • 𝐸 (𝑡) / ( |𝐸 (𝑤)| • |𝐸 (𝑡)| ), similar to the dot product and used as another similarity function. This function supports neural retrievers such as KNRM [52] and HiNT [7].
• Gaussian kernel: max 𝑡 ∈𝑆 𝑒𝑥𝑝 (-(𝐸 (𝑤) -𝐸 (𝑡)) 2 ), proposed by [52] to measure the distance between two terms within a semantic neighborhood. It can be used to find the most similar synonym to a word in 𝑆. It supports KNRM [52] and DeepTileBars [48].
• Linear aggregation on BERT embeddings: 𝑎 • 𝐸 𝑤 (𝑆) + 𝑏, where 𝐸 𝑤 (𝑆) is a vocabulary term 𝑤's BERT embedding in text 𝑆. It linearly combines BERT embeddings using learned weights 𝑎, 𝑏 and can be thought of as an aggregated contextual term weight for 𝑤 in 𝑆. It supports DeepCT [4] and can be used in combination with traditional retrievers such as BM25.
• Max operation on BERT embeddings: max 𝑡 ∈𝑆 𝑓 𝑆 (𝐸 (𝑡)) • 𝐸 (𝑤), where 𝑓 𝑆 is the logarithm of the softplus over BERT embeddings and 𝐸 (•) is a BERT embedding. This function selects the most similar term in a piece of text to a vocabulary term 𝑤 and records their similarity. It can be used to support EPIC [28] and ColBERT [24].
• Multi-layer perceptron on BERT embeddings: 𝑀𝐿𝑃 (𝐸 𝑤 (𝑆)), where 𝑀𝐿𝑃 (•) is a multilayer perceptron with activations over BERT embeddings. It can be used to support retrievers such as DeepImpact [30].
• Log conditional probability: log 𝑃 𝜃 (𝑤 |𝑆), where 𝑃 𝜃 (𝑤 |𝑆) is the conditional probability for term 𝑤 in text 𝑆 and can be obtained by using a language modeling head on the [CLS] token of 𝐸 𝑤 (𝑆), which is a vocabulary term 𝑤's BERT embedding in text 𝑆. It can be used to support deep query likelihood models such as TILDE [61].
Some of these atomic interaction functions are proven to be essential for ad-hoc retrieval and others are widely used in interaction-based neural retrievers. Note that the list of functions is not exhaustive and one can certainly expand the list or choose a subset of functions in their index. One condition must be satisfied to identify those atomic functions: the vocabulary entries, in our case the terms, must be independent of each other. In all-in-all interaction-based methods, such as MonoBERT, however, terms interact within and across a query and document, and the interactions in the transformer blocks cannot be easily decomposed. We thus do not support them.
We define a vocab-segment interaction column vector 𝑀 (𝑤 𝑖 , 𝑆 𝑘 ) for the 𝑖 𝑡ℎ vocabulary word 𝑤 𝑖 and the 𝑘 𝑡ℎ segment (across all documents), and keep each of the above atomic interaction function values in it. These term-segment interactions 𝑀 (𝑤 𝑖 , 𝑆 𝑘 ) then form the v-d interaction matrix:
𝑀 𝑣,𝑑 = 𝑐𝑜𝑛𝑐𝑎𝑡 { 𝑀 (𝑣, 𝑆 1 ), 𝑀 (𝑣, 𝑆 2 ), ... , 𝑀 (𝑣, 𝑆 𝑘 ), ... , 𝑀 (𝑣, 𝑆 𝑛 𝑏 ) } (3)
where 𝑀 (𝑣, 𝑆 𝑘 ) is obtained by combining 𝑀 (𝑤 𝑖 , 𝑆 𝑘 ) for all terms 𝑤 𝑖 ∈ 𝑣 in the 𝑘 𝑡ℎ segment. 
Eventually, we generate an interaction matrix of dimension |𝑣 | × 𝑛 𝑏 × 𝑛 𝑓 which stores the atomic interactions for every vocab term-document pair, where |𝑣 | is the vocabulary size, 𝑛 𝑏 is the number of segments in a document, and 𝑛 𝑓 is the number of atomic interaction functions.\nAt retrieval time, we obtain the q-d interaction matrix for an incoming query by looking up query terms in the vocabulary and stacking the rows in the index that stores their pre-computed interaction scores:\n𝑀 𝑞,𝑑 = 𝑠𝑡𝑎𝑐𝑘 𝑤 𝑖 ∈𝑞∩𝑣 { 𝑀 𝑤 1 ,𝑑 , 𝑀 𝑤 2 ,𝑑 , ... , 𝑀 𝑤 𝑖 ,𝑑 ...}.(4)\nAlgorithm 1 Spark pseudo-code for indexing. Our index can support neural interactions at different granularities, only by varying the segment size. To support neural retriever with document-level interactions, we let the segment size equal the document length so that there is one segment and it is 𝑑. The v-d interaction matrix becomes 𝑀 𝑣,𝑑 = 𝑀 (𝑣, 𝑑). To support neural retrieval methods with term-level interactions, we treat each term as a segment, i.e. the segment size is one term. The v-d interaction matrix becomes: 𝑀 𝑤 𝑖 ,𝑑 = 𝑐𝑜𝑛𝑐𝑎𝑡 { 𝑀 (𝑤 𝑖 , 𝑡 1 ), 𝑀 (𝑤 2 , 𝑡 2 ), ... , 𝑀 (𝑤 𝑖 , 𝑡 𝑛 𝑑 ) }, and 𝑛 𝑑 is the document length." }, { "figure_ref": [], "heading": "Accelerating with Spark", "publication_ref": [ "b54", "b21", "b4", "b5" ], "table_ref": [], "text": "Large corpora can contain tens of millions of documents and a vocabulary of tens of thousands of words. Given a dozen to three dozens of segments in a document, there can be trillions of querysegment interactions when building the index. In our implementation, we leverage Spark [55] to accelerate the indexing process. Spark uses a special data structure called the resilient distributed dataset (RDD), which can hold a large distributed collection of objects and has a built-in parallel programming infrastructure. Each RDD automatically splits into multiple data partitions and can be computed on different computer nodes [22]. Spark has two types of programming functions, transformations and actions, and uses a lazy evaluation mechanism. Transformations are not real computations but prototyping functions that wait for computation paths optimization and data parallelization are done by the underlying Spark infrastructure. Actions do the actual computations, but only when it is absolutely necessary to compute.\nWe employ Spark to accelerate the indexing process. After obtaining the vocabulary 𝑣 and segmenting all documents into segments, we perform v-d interaction over all the terms in 𝑣 and all segmented documents in corpus 𝐶: (1) We create RDDs for both vocabulary term list and document list. (2) We use a transformation operation cartesian(•, •) to compute a Cartesian product between two RDDs, for example we have a vocabulary 𝑉 = {𝑎𝑝𝑝𝑙𝑒, 𝑏𝑎𝑛𝑎𝑛𝑎} and a collection 𝐶 = {𝑑1, 𝑑2}, the Cartesian function returns a list of term-document pairs: {(𝑎𝑝𝑝𝑙𝑒, 𝑑1), (𝑎𝑝𝑝𝑙𝑒, 𝑑2), (𝑏𝑎𝑛𝑎𝑛𝑎, 𝑑1), (𝑏𝑎𝑛𝑎𝑛𝑎, 𝑑2)}. (3) We use another transformation operation map(•) to calculate interaction matrix by applying function interaction(•) to v-d pairs, where interaction(•) is defined in Section 2.3. (4) In order to balance the memory storage with information loss, we use a transformation operation filter to control the index sparsity by the threshold 𝜎 𝑖𝑛𝑑𝑒𝑥 . For example, if 𝜎 𝑖𝑛𝑑𝑒𝑥 = 0, only the documents containing the corresponding term are stored in the index. With 𝜎 𝑖𝑛𝑑𝑒𝑥 > 0 can further improve the sparsity and efficiency of the index, but it may lead to information loss and effectiveness drop. 
(5) We combine and reshape the word-segment interaction into a v-d interaction matrix. (6) We then use an action operation to write the results to disk. Algorithm 1 outlines our Spark implementation." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [ "b39", "b18", "b59" ], "table_ref": [], "text": "We experiment on LETOR 4.0, a widely used benchmark dataset for document retrieval. It uses the Gov2 web page collection (∼2M pages) and two query sets from TREC Million Query (MQ) 2007 and 2008 Tracks [40]. MQ2007 contains about 1,700 queries and 65,323 annotated documents; MQ2008 contains 800 queries and 15,211 annotated documents. We use the official effectiveness metrics in LETOR. They include Precision (P), Normalized Discounted Cumulative Gain (nDCG) [19] at various positions, and Mean Average Precision (MAP) [60]. For efficiency measures, we report the wall clock time in milliseconds for processing a query-document pair during training and during testing, respectively. The training time calculates the average time used for each sample (𝑞, 𝑑 1 , 𝑑 2 , 𝑙𝑎𝑏𝑒𝑙) per epoch. We compare the time spent by a retriever when there is no index supporting it versus when there is an index. For experimental runs with No Index, the training time includes the time spent on generating the interaction matrix; for experimental runs with an index, it contains the time to look up from the index. The test time calculates the average time spent on predicting a score for each query-document pair (𝑞, 𝑑). For all runs, we adopt five-fold cross-validation and report its results." }, { "figure_ref": [], "heading": "Baseline Runs", "publication_ref": [ "b55", "b42", "b3", "b51", "b6", "b47", "b33", "b55", "b30" ], "table_ref": [], "text": "We organize the experiments by separating a retriever's indexing method from its retrieval method and report the effectiveness and efficiency of the combinations. All runs in our experiments perform first-stage document retrieval, not re-ranking nor passage retrieval. Note that although many recent neural retrievers have been proposed, we select a few representative ones for their advanced retrieval functions, especially the early neural methods that had no index nor dense representation to remedy the efficiency issue. Previously, they were limited to acting as re-rankers on passage retrieval tasks. In this work, one of our main purposes is to rejuvenate them for first-stage, full-length document retrieval. We therefore select the following retrieval methods for our experimentation:
• Dot Product: In [56], the relevance score is calculated by 𝑠 (𝑞, 𝑑) = Σ |𝑞 𝑖 |>0 𝑞 𝑖 𝑑 𝑖 , where 𝑞 𝑖 are the non-zero elements of 𝑞 and 𝑑 𝑖 is the element of the document representation at the 𝑖-th index. For our runs with the SEINE index, we enable the dot product and score the relevance by summing over all query terms.
• BM25 [43]: We compare BM25 results using an inverted index with conventional bag-of-words term weights vs. using the inverted index with one of SEINE's interaction values turned on. 
The interaction is the BERT-based term weight proposed by DeepCT [4].
• KNRM [52]: an interaction-based neural retrieval method that performs term-level interaction over term embeddings, then uses kernel-pooling to extract multi-level soft matching features to learn the relevance score.
• HiNT [7]: a hierarchical neural retrieval method that generates segment-level interaction matrices as the network's input, then uses a local matching layer and a global decision layer to learn the relevance score.
• DeepTileBars [48]: a neural retrieval method that segments documents by topics and then scans them with multiple varied-sized convolutional neural networks (CNNs) to learn the relevance score.
For the indexing methods, we experiment on:
• No Index: This is the case when a retriever directly processes the document collection and interacts a query with all documents at query time. Most neural retrievers are of this type, including the most effective MonoBERT [34]. In our experiments, we test on the above-mentioned neural retrievers to illustrate.
• Inverted Index (InvIdx): the traditional indexing method using the bag-of-words representation. It stores a posting list for each vocabulary term 𝑤, consisting of all the (𝑑, 𝑡 𝑓 (𝑤, 𝑑)) pairs for every document 𝑑 containing 𝑤. Note that InvIdx does not store any semantic interactions nor embedding-based interactions, so it cannot support the recent neural retrieval methods.
• SNRM [56]: a neural indexing method that learns a sparse latent representation for the interaction between each pair of training query and document, and stores the latent nodes in an inverted index. Note that in SNRM, the latent words are independent of each other, which satisfies the condition mentioned in Section 2.3 that vocabulary entries must be independent of each other. We can therefore use the latent nodes as vocabulary entries in the inverted index and manage to apply SNRM to KNRM, HiNT, and DeepTileBars. We first represent the query and document as a sequence of latent words, and then generate the interaction matrix based on the latent words.
We use the following parameter settings in our implementations. To keep a manageable vocabulary size, we use the middle 80% frequent terms in the vocabulary. For LETOR 4.0, the vocabulary is around 40,000 words. We set 𝜎 𝑖𝑛𝑑𝑒𝑥 = 0, which means that only the documents containing the corresponding term are stored in the index. Setting 𝜎 𝑖𝑛𝑑𝑒𝑥 > 0 can further improve the sparsity and efficiency of the index, but it may lead to information loss and an effectiveness drop; so we set it to zero. For all experimental runs, whenever possible, we use the default settings recommended in their original published papers. We employ the pretrained word2vec model [31] to represent terms for KNRM, HiNT, and DeepTileBars, and BERT embeddings in BERT-base-uncased for DeepCT." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Table 1 reports the effectiveness and efficiency results for a few retrieval methods, such as KNRM, HiNT, and DeepTileBars, combined with different indexing methods on the LETOR 4.0 first-stage document retrieval task. Our experiments demonstrated similar results for both MQ2007 and MQ2008. Here No Index is the baseline indexing method, which shows the performance when a retrieval method processes the entire document collection at query time without using any index. 
InvIdx can only support bag-of-words retrievers and we show BM25 to illustrate. SNRM and SEINE can both be used to support various retrieval methods. Note that the original DeepCT system is included as one run under SEINE since we include DeepCT weights in our index.
Both SNRM and SEINE are able to significantly improve retrieval efficiency. SNRM makes the index sparse enough to handle a large corpus by mapping document text to new latent terms. Its sparsity helps reduce the amount of interaction matrix calculation, thereby speeding up the retrieval phase. For instance, it gets 1.2×, 1.2×, 1.2× faster for training the KNRM, HiNT, and DeepTileBars models, and 1.3×, 1.1×, 1.3× faster for testing on the MQ2007 dataset. However, SNRM has a large degradation, ranging from -40% to -9% on the effectiveness metrics, over No Index. This might be caused by SNRM's ineffectiveness in lexical matching, i.e., exact matching. SNRM is good at semantic matching, which aims to address a variety of linguistic phenomena, such as synonymy, paraphrase, and term variation; its index only stores the latent terms. Although it is applicable to multiple neural retrievers as SEINE is, it is difficult for SNRM to get overlapping terms between query and document. Lexical information is lost in the latent semantic nodes. On the LETOR dataset, our results indicate SNRM makes a poor trade-off between efficiency and information loss.
On the contrary, SEINE stores the term-segment interaction in its index and generates a q-d interaction matrix by looking up and stacking rows for actual query terms. With minimal degradation in effectiveness, SEINE gets 3.7×, 1.4×, 7.4× faster for training the KNRM, HiNT, and DeepTileBars models, and 13.7×, 1.4×, 28.1× faster for testing on the MQ2007 dataset. Similar results on the MQ2008 dataset can be observed." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Impact of Segment Size", "publication_ref": [], "table_ref": [], "text": "We also look into the effect of different segment sizes in SEINE. We experiment with different numbers of segments per document using DeepTileBars+SEINE and report the findings on the MQ2008 dataset. Figure 2(a) is about effectiveness. It shows that using 20 segments per document performs the best for precision (in fact for all other effectiveness metrics too). Figure 2(b) is about efficiency. It shows that as the number of segments per document increases, the neural networks take longer to train while the test time remains more or less the same. We also find that the average segment lengths for the best choices of segments per document, 20 and 30, are around 270 and 200 words, respectively. This is about the length of a natural paragraph. It suggests that we should select our segment size close to an author's topical breaks, instead of choosing too large a segment size just for the sake of increasing training efficiency. At the same time, since test time does not vary much with segment size, we can select a smaller number of segments to improve query run-time efficiency." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [ "b29", "b27", "b60", "b29", "b27", "b60" ], "table_ref": [], "text": "This paper proposes SEINE, a SEgment-based Indexing method for NEural information retrieval, which moves heavy computations in interaction-based neural retrievers offline and only keeps essential calculations at query time. It provides a general indexing framework and flexibly supports a variety of interaction-based neural retrieval methods. 
During the indexing phase, we build a vocabulary from the entire corpus and compute and store the vocabulary-segment interactions in the index. We propose to use segment-level inverted index to store the atomic query-document interaction values. Our indexing method can make the query run-time up to 28 times faster without sacrificing their effectiveness on LETOR 4.0 for first-stage, full-length document retrieval.\nWe propose to store atomic interactions between vocabulary term and segments in an inverted index. These interactions include term frequencies, inverse document frequency, dot products, operations over BERT embeddings, conditional probabilities, etc. Some of them are adopted from a few recent first-stage retrievers, such as DeepImpact [30], EPIC [28], and TILDE [61]. Our experiments did not include them because (1) our main focus is to rejuvenate the index-less re-rankers that achieve high performance, but suffer in terms of efficiency. However, (2) we do identify the atomic interaction function in DeepImpact [30], EPIC [28], and TILDE [61], and include them in our index. We hope re-building an index for neural retrieval can be avoided as much as possible so that researchers can shift their attention back to creating versatile retrieval functions.\nCurrently, SEINE does not support MonoBERT, an all-in-all, dense interaction-based method, the most effective retriever at the moment. In all-in-all interaction, terms interact within and across a pair of query and document. To implement SEINE for such methods, we need to understand how to decompose these all-in-all interactions within the transformer blocks into some function of independent vocabulary entries. We leave this as future work." } ]
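To make the segment-level indexing and query-time lookup described in the sections above more concrete, the following is a minimal Python sketch of the offline v-d interaction step and the query-time stacking of Eq. (3)-(4). It is an illustration under stated assumptions rather than the authors' implementation: the helpers embed and segment_document are hypothetical placeholders (a toy random encoder and fixed-size chunking standing in for word2vec/BERT and TextTiling), and only three atomic interaction functions (term frequency, indicative idf, dot product) are stored.

```python
# Illustrative sketch only; hypothetical helpers, not the authors' implementation.
import math
from collections import Counter

import numpy as np

N_B = 3        # number of segments per document (n_b)
EMB_DIM = 8    # toy embedding size used by the dot-product interaction

_rng = np.random.default_rng(0)
_EMB = {}      # toy stand-in for a pre-trained encoder such as word2vec or BERT


def embed(term):
    """Return a fixed toy vector per term (placeholder for a real encoder)."""
    if term not in _EMB:
        _EMB[term] = _rng.normal(size=EMB_DIM)
    return _EMB[term]


def segment_document(doc, n_b=N_B):
    """Split a document into n_b segments (placeholder for TextTiling)."""
    tokens = doc.split()
    size = max(1, math.ceil(len(tokens) / n_b))
    return [tokens[i * size:(i + 1) * size] for i in range(n_b)]


def build_index(corpus):
    """Offline phase: store atomic interaction values per (term, doc, segment)."""
    vocab = sorted({t for d in corpus for t in d.split()})
    df = Counter(t for d in corpus for t in set(d.split()))
    idf = {t: math.log(len(corpus) / (df[t] + 1)) for t in vocab}
    index = {t: np.zeros((len(corpus), N_B, 3)) for t in vocab}  # n_f = 3 here
    for j, doc in enumerate(corpus):
        for k, seg in enumerate(segment_document(doc)):
            seg_emb = sum((embed(t) for t in seg), np.zeros(EMB_DIM))
            for t, c in Counter(seg).items():
                index[t][j, k, 0] = c                          # tf(w, S)
                index[t][j, k, 1] = idf[t]                     # idf(w) * I_S(w)
                index[t][j, k, 2] = float(embed(t) @ seg_emb)  # sum_t E(w).E(t)
    return index


def qd_interaction(query, index):
    """Query time: look up and stack pre-computed rows for query terms (Eq. 4)."""
    rows = [index[t] for t in query.split() if t in index]
    return np.stack(rows) if rows else None  # shape: |q| x |C| x n_b x n_f


corpus = ["neural retrieval with a segment level inverted index",
          "bm25 uses term frequency and idf stored in an inverted index"]
index = build_index(corpus)
m_qd = qd_interaction("inverted index retrieval", index)
print(None if m_qd is None else m_qd.shape)
```

A downstream retrieval function would consume m_qd directly; in this sketch the dot-product value is only stored for terms occurring in a segment, mirroring the σ_index = 0 filtering described above.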
Many early neural Information Retrieval (NeurIR) methods are re-rankers that rely on a traditional first-stage retriever due to expensive query time computations. Recently, representation-based retrievers have gained much attention, which learns query representation and document representation separately, making it possible to pre-compute document representations offline and reduce the workload at query time. Both dense and sparse representationbased retrievers have been explored. However, these methods focus on finding the representation that best represents a text (aka metric learning) and the actual retrieval function that is responsible for similarity matching between query and document is kept at a minimum by using dot product. One drawback is that unlike traditional term-level inverted index, the index formed by these embeddings cannot be easily re-used by another retrieval method. Another drawback is that keeping the interaction at minimum hurts retrieval effectiveness. On the contrary, interaction-based retrievers are known for their better retrieval effectiveness. In this paper, we propose a novel SEgment-based Neural Indexing method, SEINE, which provides a general indexing framework that can flexibly support a variety of interaction-based neural retrieval methods. We emphasize on a careful decomposition of common components in existing neural retrieval methods and propose to use segment-level inverted index to store the atomic query-document interaction values. Experiments on LETOR MQ2007 and MQ2008 datasets show that our indexing method can accelerate multiple neural retrieval methods up to 28-times faster without sacrificing much effectiveness.
SEINE: SEgment-based Indexing for NEural information retrieval
[ { "figure_caption": "Figure 1 :1Figure 1: SEINE: SEgment-based Indexing for NEural information retrieval.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Effectiveness and Efficiency Results with different number of segments. Experimented with DeepTile-Bars+SEINE on MQ2008.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "• Linear aggregation on BERT Embeddings: 𝑎 • 𝐸 𝑤 (𝑆) + 𝑏, where 𝐸 𝑤 (𝑆) is a vocabulary term 𝑤's BERT embedding in text 𝑆. It linearly combines BERT embeddings using learned weights 𝑎, 𝑏 and can be thought of an aggregated contextual term weight for 𝑤 in 𝑆. It supports DeepCT[4] and can be used in combination with traditional retrievers such as BM25.• Max operation on BERT Embeddings: max 𝑡 ∈𝑆 𝑓 𝑆 (𝐸 (𝑡)) • 𝐸 (𝑤), where 𝑓 𝑆 is the logarithm of the softplus over BERT embeddings. 𝐸 (•) is a BERT embedding. This function selects the most similar term in a piece of text to a vocabulary term (𝑤) and records their similarity. It can be used to support EPIC[28] and ColBERT[24]. • Multi-layer Perceptron on BERT Embeddings: 𝑀𝐿𝑃 (𝐸 𝑤 (𝑆)), where 𝑀𝐿𝑃 (•) is multilayer perceptron with activations over BERT embeddings. It can be used to support retrievers such as DeepImpact [30]. • Log conditional probability: log 𝑃 𝜃 (𝑤 |𝑆), where 𝑃 𝜃 (𝑤 |𝑆)", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Initialize Spark environment and configuration 2: Import functions segmentation, interaction 3: Vocab ← 𝑅𝐷𝐷 {𝑤 1 , 𝑤 2 , ..., 𝑤 |𝑉 | } ⊲ create RDD 4: Corpus ← 𝑅𝐷𝐷 {𝑑 1 , 𝑑 2 , ..., 𝑑 |𝐶 | }", "figure_data": "⊲ create RDD5: Segmts ← Corpus.map (segmentation)⊲ document segmentation6: Cart ← Vocab.𝑐𝑎𝑟𝑡𝑒𝑠𝑖𝑎𝑛 (Segmts)7: Index ← Cart.map (interaction)⊲ calculate 𝑀 as in § 2.38: Index ← Index.filter (𝑡 𝑓 > 𝜎 𝑖𝑛𝑑𝑒𝑥 )9: Index ← Index.reshape⊲ v-S to v-d10: Index.saveAsPickleFile ()", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Retrieval Effectiveness and Efficiency on MQ2007 and MQ2008. * denotes statistically significant degradation on effectiveness and † for statistically significant improvement on efficiency compared to corresponding retrieval method with \"No Index\". Results reported in Fan et al.[7]. ‡ Indexing method is not applicable to KNRM, HiNT, and DeepTileBars.", "figure_data": "(a) MQ2007.", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" } ]
Sibo Dong; Justin Goldstein; Grace Hui Yang
[ { "authors": "Yang Bai; Xiaoguang Li; Gang Wang; Chaoliang Zhang; Lifeng Shang; Jun Xu; Zhaowei Wang; Fangshan Wang; Qun Liu", "journal": "", "ref_id": "b0", "title": "SparTerm: Learning termbased sparse representation for fast text retrieval", "year": "2020" }, { "authors": "Stéphane Clinchant; Florent Perronnin", "journal": "", "ref_id": "b1", "title": "Aggregating continuous word embeddings for information retrieval", "year": "2013" }, { "authors": "Paolo Cremonesi; Yehuda Koren; Roberto Turrin", "journal": "Association for Computing Machinery", "ref_id": "b2", "title": "Performance of Recommender Algorithms on Top-n Recommendation Tasks", "year": "2010" }, { "authors": "Zhuyun Dai; Jamie Callan", "journal": "", "ref_id": "b3", "title": "Context-aware sentence/passage term importance estimation for first stage retrieval", "year": "2019" }, { "authors": "Zhuyun Dai; Jamie Callan", "journal": "Association for Computing Machinery", "ref_id": "b4", "title": "Context-Aware Document Term Weighting for Ad-Hoc Search", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b5", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Yixing Fan; Jiafeng Guo; Yanyan Lan; Jun Xu; Chengxiang Zhai; Xueqi Cheng", "journal": "", "ref_id": "b6", "title": "Modeling diverse relevance patterns in ad-hoc retrieval", "year": "2018" }, { "authors": "Thibault Formal; Benjamin Piwowarski; Stéphane Clinchant", "journal": "Association for Computing Machinery", "ref_id": "b7", "title": "SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking", "year": "2021" }, { "authors": "Debasis Ganguly; Dwaipayan Roy; Mandar Mitra; Gareth; Jones", "journal": "", "ref_id": "b8", "title": "Word embedding based generalized language model for information retrieval", "year": "2015" }, { "authors": "Luyu Gao; Jamie Callan", "journal": "", "ref_id": "b9", "title": "Condenser: a Pre-training Architecture for Dense Retrieval", "year": "2021" }, { "authors": "Luyu Gao; Zhuyun Dai; Jamie Callan", "journal": "", "ref_id": "b10", "title": "COIL: Revisit Exact Lexical Match in Information Retrieval with Contextualized Inverted List", "year": "2021" }, { "authors": "Mihajlo Grbovic; Nemanja Djuric; Vladan Radosavljevic; Narayan Bhamidipati", "journal": "", "ref_id": "b11", "title": "Search retargeting using directed query embeddings", "year": "2015" }, { "authors": "Jiafeng Guo; Yixing Fan; Qingyao Ai; Bruce Croft", "journal": "", "ref_id": "b12", "title": "A deep relevance matching model for ad-hoc retrieval", "year": "2016" }, { "authors": "Ruiqi Guo; Philip Sun; Erik Lindgren; Quan Geng; David Simcha; Felix Chern; Sanjiv Kumar", "journal": "PMLR", "ref_id": "b13", "title": "Accelerating Large-Scale Inference with Anisotropic Vector Quantization", "year": "2020" }, { "authors": "Parth Gupta; Kalika Bali; Rafael E Banchs; Monojit Choudhury; Paolo Rosso", "journal": "", "ref_id": "b14", "title": "Query expansion for mixed-script information retrieval", "year": "2014" }, { "authors": "A Marti; Hearst", "journal": "", "ref_id": "b15", "title": "Multi-paragraph segmentation of expository text", "year": "1994" }, { "authors": "Po-Sen Huang; Xiaodong He; Jianfeng Gao; Li Deng; Alex Acero; Larry Heck", "journal": "", "ref_id": "b16", "title": "Learning deep structured semantic models for web search using clickthrough data", "year": "2013" }, { "authors": "Gautier Izacard; Edouard Grave", "journal": 
"", "ref_id": "b17", "title": "Leveraging passage retrieval with generative models for open domain question answering", "year": "2020" }, { "authors": "Kalervo Järvelin; Jaana Kekäläinen", "journal": "ACM Transactions on Information Systems (TOIS)", "ref_id": "b18", "title": "Cumulated gain-based evaluation of IR techniques", "year": "2002" }, { "authors": "Jeff Johnson; Matthijs Douze; Hervé Jégou", "journal": "IEEE Transactions on Big Data", "ref_id": "b19", "title": "Billion-scale similarity search with gpus", "year": "2019" }, { "authors": "I T Jolliffe", "journal": "Springer Verlag", "ref_id": "b20", "title": "Principal Component Analysis", "year": "1986" }, { "authors": "Holden Karau; Andy Konwinski; Patrick Wendell; Matei Zaharia", "journal": "O'Reilly Media, Inc", "ref_id": "b21", "title": "Learning spark: lightning-fast big data analysis", "year": "2015" }, { "authors": "Vladimir Karpukhin; Barlas Oğuz; Sewon Min; Patrick Lewis; Ledell Wu; Sergey Edunov; Danqi Chen; Wen-Tau Yih", "journal": "", "ref_id": "b22", "title": "Dense passage retrieval for opendomain question answering", "year": "2020" }, { "authors": "Omar Khattab; Matei Zaharia", "journal": "Association for Computing Machinery", "ref_id": "b23", "title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT", "year": "2020" }, { "authors": "Kenton Lee; Ming-Wei Chang; Kristina Toutanova", "journal": "", "ref_id": "b24", "title": "Latent Retrieval for Weakly Supervised Open Domain Question Answering", "year": "2019" }, { "authors": "Zhengdong Lu; Hang Li", "journal": "Advances in neural information processing systems", "ref_id": "b25", "title": "A deep architecture for matching short texts", "year": "2013" }, { "authors": "Yi Luan; Jacob Eisenstein; Kristina Toutanova; Michael Collins", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b26", "title": "Sparse, Dense, and Attentional Representations for Text Retrieval", "year": "2021" }, { "authors": "Sean Macavaney; Maria Franco; Raffaele Nardini; Nicola Perego; Nazli Tonellotto; Ophir Goharian; Frieder", "journal": "Association for Computing Machinery", "ref_id": "b27", "title": "Expansion via Prediction of Importance with Contextualization", "year": "2020" }, { "authors": "Yu A Malkov; D A Yashunin", "journal": "IEEE Trans. Pattern Anal. Mach. 
Intell", "ref_id": "b28", "title": "Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs", "year": "2020-04" }, { "authors": "Antonio Mallia; Omar Khattab; Torsten Suel; Nicola Tonellotto", "journal": "", "ref_id": "b29", "title": "Learning passage impacts for inverted indexes", "year": "2021" }, { "authors": "Tomas Mikolov; Ilya Sutskever; Kai Chen; Greg S Corrado; Jeff Dean", "journal": "Advances in neural information processing systems", "ref_id": "b30", "title": "Distributed representations of words and phrases and their compositionality", "year": "2013" }, { "authors": "Mitra Bhaskar", "journal": "", "ref_id": "b31", "title": "Exploring session context using distributed representations of queries and reformulations", "year": "2015" }, { "authors": "Nick Bhaskar Mitra; Craswell", "journal": "Now Foundations and Trends", "ref_id": "b32", "title": "An introduction to neural information retrieval", "year": "2018" }, { "authors": "Rodrigo Nogueira; Kyunghyun Cho", "journal": "", "ref_id": "b33", "title": "Passage Re-ranking with BERT", "year": "2019" }, { "authors": "Rodrigo Nogueira; Zhiying Jiang; Jimmy Lin", "journal": "", "ref_id": "b34", "title": "Document ranking with a pretrained sequence-to-sequence model", "year": "2020" }, { "authors": "Liang Pang; Yanyan Lan; Jiafeng Guo; Jun Xu; Xueqi Cheng", "journal": "", "ref_id": "b35", "title": "A study of matchpyramid models on ad-hoc retrieval", "year": "2016" }, { "authors": "Liang Pang; Yanyan Lan; Jiafeng Guo; Jun Xu; Shengxian Wan; Xueqi Cheng", "journal": "", "ref_id": "b36", "title": "Text matching as image recognition", "year": "2016" }, { "authors": "Liang Pang; Yanyan Lan; Jiafeng Guo; Jun Xu; Jingfang Xu; Xueqi Cheng", "journal": "", "ref_id": "b37", "title": "Deeprank: A new deep architecture for relevance ranking in information retrieval", "year": "2017" }, { "authors": "Biswajit Paria; Chih-Kuan Yeh; Ian Eh Yen; Ning Xu; Pradeep Ravikumar; Barnabás Póczos", "journal": "", "ref_id": "b38", "title": "Minimizing flops to learn efficient sparse representations", "year": "2020" }, { "authors": "Tao Qin; Tie-Yan Liu", "journal": "", "ref_id": "b39", "title": "Introducing LETOR 4.0 datasets", "year": "2013" }, { "authors": "Yingqi Qu; Yuchen Ding; Jing Liu; Kai Liu; Ruiyang Ren; Wayne Xin Zhao; Daxiang Dong; Hua Wu; Haifeng Wang", "journal": "", "ref_id": "b40", "title": "RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering", "year": "2020" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b41", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "Stephen Robertson; Hugo Zaragoza", "journal": "Now Publishers Inc", "ref_id": "b42", "title": "The probabilistic relevance framework: BM25 and beyond", "year": "2009" }, { "authors": "Peter Schäuble", "journal": "Association for Computing Machinery", "ref_id": "b43", "title": "SPIDER: A Multiuser Information Retrieval System for Semistructured and Dynamic Data", "year": "1993" }, { "authors": "Aliaksei Severyn; Alessandro Moschitti", "journal": "", "ref_id": "b44", "title": "Learning to rank short text pairs with convolutional deep neural networks", "year": "2015" }, { "authors": "Yelong Shen; Xiaodong He; Jianfeng Gao; Li Deng; Grégoire Mesnil", "journal": "", "ref_id": "b45", "title": "Learning semantic representations using convolutional neural networks for web search", "year": "2014" }, { "authors": 
"Anshumali Shrivastava; Ping Li", "journal": "Advances in neural information processing systems", "ref_id": "b46", "title": "Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS)", "year": "2014" }, { "authors": "Zhiwen Tang; Grace Hui; Yang ", "journal": "", "ref_id": "b47", "title": "Deeptilebars: Visualizing term distribution for neural information retrieval", "year": "2019" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of Machine Learning Research", "ref_id": "b48", "title": "Visualizing Data using t-SNE", "year": "2008" }, { "authors": "Shuai Wang; Shengyao Zhuang; Guido Zuccon", "journal": "", "ref_id": "b49", "title": "BERT-Based Dense Retrievers Require Interpolation with BM25 for Effective Passage Retrieval", "year": "2021" }, { "authors": "Yonghui Wu; Mike Schuster; Zhifeng Chen; V Quoc; Mohammad Le; Wolfgang Norouzi; Maxim Macherey; Yuan Krikun; Qin Cao; Klaus Gao; Macherey", "journal": "", "ref_id": "b50", "title": "Google's neural machine translation system: Bridging the gap between human and machine translation", "year": "2016" }, { "authors": "Chenyan Xiong; Zhuyun Dai; Jamie Callan; Zhiyuan Liu; Russell Power", "journal": "", "ref_id": "b51", "title": "End-to-end neural ad-hoc ranking with kernel pooling", "year": "2017" }, { "authors": "Lee Xiong; Chenyan Xiong; Ye Li; Kwok-Fung Tang; Jialin Liu; Paul Bennett; Junaid Ahmed; Arnold Overwijk", "journal": "", "ref_id": "b52", "title": "Approximate nearest neighbor negative contrastive learning for dense text retrieval", "year": "2020" }, { "authors": "Wei Yang; Yuqing Xie; Aileen Lin; Xingyu Li; Luchen Tan; Kun Xiong; Ming Li; Jimmy Lin", "journal": "", "ref_id": "b53", "title": "End-to-end open-domain question answering with bertserini", "year": "2019" }, { "authors": "Matei Zaharia; Mosharaf Chowdhury; J Michael; Scott Franklin; Ion Shenker; Stoica", "journal": "HotCloud", "ref_id": "b54", "title": "Spark: Cluster computing with working sets", "year": "2010" }, { "authors": "Hamed Zamani; Mostafa Dehghani; Bruce Croft; Erik Learned-Miller; Jaap Kamps", "journal": "", "ref_id": "b55", "title": "From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing", "year": "2018" }, { "authors": "Jingtao Zhan; Jiaxin Mao; Yiqun Liu; Min Zhang; Shaoping Ma", "journal": "", "ref_id": "b56", "title": "Repbert: Contextualized text embeddings for first-stage retrieval", "year": "2020" }, { "authors": "Tiancheng Zhao; Xiaopeng Lu; Kyusong Lee", "journal": "", "ref_id": "b57", "title": "Sparta: Efficient opendomain question answering via sparse transformer matching retrieval", "year": "2020" }, { "authors": "Guoqing Zheng; Jamie Callan", "journal": "", "ref_id": "b58", "title": "Learning to reweight terms with distributed representations", "year": "2015" }, { "authors": "Mu Zhu", "journal": "", "ref_id": "b59", "title": "Recall, precision and average precision", "year": "2004" }, { "authors": "Shengyao Zhuang; Guido Zuccon", "journal": "Association for Computing Machinery", "ref_id": "b60", "title": "TILDE: Term Independent Likelihood MoDEl for Passage Re-Ranking", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 322.38, 104.16, 236.24, 28.46 ], "formula_id": "formula_1", "formula_text": "Process 𝑑 𝑖 → Store⟨𝑣, 𝑑 𝑖 ⟩ in index, ∀𝑖 ↘ Lookup 𝑞 in 𝑣 → Top-k results Process 𝑞 → 𝑞 ↗ by 𝑠 (𝑞, 𝑑 𝑖 ),(2)" }, { "formula_coordinates": [ 3, 439.66, 393.12, 102.57, 14.37 ], "formula_id": "formula_2", "formula_text": "𝑖𝑑 𝑓 (𝑤 𝑖 ) = log |𝐶 | | { 𝑗 |𝑤 𝑖 ∈𝑑 𝑗 } |+1 ." }, { "formula_coordinates": [ 4, 69.77, 641.7, 149.63, 14.33 ], "formula_id": "formula_3", "formula_text": "• Cosine similarity: 𝑡 ∈𝑆 𝐸 (𝑤 ) •𝐸 (𝑡 ) |𝐸 (𝑤 ) | • |𝐸 (𝑡 ) |" }, { "formula_coordinates": [ 4, 321.86, 562.57, 236.82, 8.47 ], "formula_id": "formula_4", "formula_text": "𝑀 𝑣,𝑑 = 𝑐𝑜𝑛𝑐𝑎𝑡 { 𝑀 (𝑣, 𝑆 1 ), 𝑀 (𝑣, 𝑆 2 ), ... , 𝑀 (𝑣, 𝑆 𝑘 ), ... , 𝑀 (𝑣, 𝑆 𝑛 𝑏 ) } (3)" }, { "formula_coordinates": [ 4, 350.56, 701.71, 208.12, 9.53 ], "formula_id": "formula_5", "formula_text": "𝑀 𝑞,𝑑 = 𝑠𝑡𝑎𝑐𝑘 𝑤 𝑖 ∈𝑞∩𝑣 { 𝑀 𝑤 1 ,𝑑 , 𝑀 𝑤 2 ,𝑑 , ... , 𝑀 𝑤 𝑖 ,𝑑 ...}.(4)" } ]
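The Spark pipeline of Algorithm 1 (cartesian → map → filter → save, Section 2.4 above) can be sketched in PySpark roughly as follows. This is a hedged sketch under assumptions: segmentation and interaction are simplified placeholders (fixed-size chunking and per-segment term frequency), not the authors' exact functions, and the tiny in-memory corpus stands in for a real collection.

```python
# Hedged sketch of the Algorithm 1 pipeline; helpers are simplified placeholders.
from pyspark import SparkConf, SparkContext

N_B = 3          # segments per document
SIGMA_INDEX = 0  # sparsity threshold on the stored term frequency


def segmentation(doc):
    """Placeholder for TextTiling: split (doc_id, text) into N_B segments."""
    doc_id, text = doc
    tokens = text.split()
    size = max(1, -(-len(tokens) // N_B))  # ceiling division
    return doc_id, [tokens[i * size:(i + 1) * size] for i in range(N_B)]


def interaction(pair):
    """Placeholder atomic interaction: per-segment term frequency for a term-document pair."""
    term, (doc_id, segments) = pair
    return term, doc_id, [seg.count(term) for seg in segments]


conf = SparkConf().setAppName("seine-sketch").setMaster("local[*]")
sc = SparkContext(conf=conf)

vocab = sc.parallelize(["index", "neural", "retrieval"])             # term RDD
corpus = sc.parallelize([("d1", "neural retrieval with an index"),
                         ("d2", "index structures for fast retrieval")])

segmented = corpus.map(segmentation)          # document segmentation
pairs = vocab.cartesian(segmented)            # all vocab-document pairs
index = pairs.map(interaction)                # compute interaction values
index = index.filter(lambda rec: sum(rec[2]) > SIGMA_INDEX)
print(index.collect())                        # or: index.saveAsPickleFile(path)
sc.stop()
```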
2023-11-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b8", "b13", "b8", "b13", "b13" ], "table_ref": [], "text": "Multi-attribute group decision-making is a process in which many decision-makers determine the decision-making range around the decision-making goal, and then propose decision-making methods to evaluate, rank and select alternatives [1]. This process mainly solves problems of evaluation and selection, and its theory and methods are widely used in engineering, technology, economy, management and other fields. In this paper, experts generally refer to decision-makers.
The fuzzy multi-attribute group decision-making process can use many different methods, such as ELECTRE [2], PROMETHEE [3], TOPSIS [4] and so on. However, no matter which method we use, we must consider how to handle fuzzy data, expert weights and attribute weights. Firstly, with the development of the economy and society, the decision-making problems that people need to solve are becoming more and more complicated. On the one hand, it is difficult to quantify some attributes because of their fuzziness. In this case, decision makers cannot obtain accurate information and accordingly cannot make accurate evaluations. On the other hand, even if an attribute can be quantified, the evaluation value is easily made inaccurate by subjective and objective factors such as the limited energy of decision makers and their incomplete understanding of the problem. It is not difficult to conclude that almost all decision-making processes involve fuzziness, which is why the problem of fuzzy multi-attribute decision-making has attracted widespread attention. The fuzzy multi-attribute decision-making method introduces fuzzy theory into multi-attribute decision-making to improve the scientificity and practicability of decision-making, because it can not only better describe the attributes of alternatives, but also overcome the difficulty of inaccurate evaluations by decision makers caused by subjective and objective factors.
Secondly, the determination of both expert weights and attribute weights is very important in multi-attribute group decision-making and has attracted the attention of a large number of scholars, because different weights may lead to different decision-making results. The methods of determining weights fall into the following three categories: the subjective weighting method, the objective weighting method, and the combination of subjective and objective weighting methods. The subjective weighting method compares the importance of decision makers or attributes and assigns weights accordingly. The advantage of this method is that weights can be assigned according to the importance of decision makers or attributes, but it is subjective and consumes a lot of manpower and material resources. Among them, the common methods are AHP [5] and Delphi. The objective weighting method uses objective data to obtain weights, which has the advantages of strong objectivity and a solid mathematical foundation. However, it does not consider the subjective intention of decision makers, and the results may be inconsistent with the actual situation. Among them, the common methods are the entropy weighting method [6] and the deviation maximization method. 
The combined subjective-objective weighting method addresses the respective advantages and disadvantages of the subjective and objective approaches: it considers both the subjective intention of decision-makers and the internal laws of the objective data [7], which makes the results more realistic and reliable. Common methods include goal programming. In practical applications, the combined subjective-objective weighting method is usually preferred for determining the expert weights and attribute weights, since it makes the final decision result more credible.
In order to solve increasingly complex decision-making problems more reasonably, many scholars have conducted in-depth research on fuzzy multi-attribute group decision-making methods. In the intuitionistic fuzzy environment, Sina et al. assign the expert weights directly by a subjective weighting method, determine the attribute weights by combining the objective CRITIC and Ideal Point methods, and obtain the ranking of alternatives by combining ARAS and EDAS to solve a decision-making problem for entrepreneurial construction projects [8]. In the interval-valued intuitionistic fuzzy environment, Ting-Yu assigns the expert weights directly by a subjective weighting method, determines the attribute weights by a weight optimization model, and ranks the alternatives with the TOPSIS method to solve a treatment-plan decision-making problem [9]. In the intuitionistic fuzzy environment, Shi-fang et al. obtain the expert weights through the IFWA operator, aggregate the decision matrices of the individual experts into a single decision matrix, determine the attribute weights through intuitionistic fuzzy entropy, and finally solve a personnel selection problem through GRA [10]. In the intuitionistic fuzzy environment, Behnam et al. obtain the expert weights through the IFWA operator, determine the attribute weights by a subjective weighting method combined with the IFWA operator, and solve a decision-making problem on updating a company's manufacturing system with the ELECTRE method [11]. In the interval-valued intuitionistic fuzzy environment, Feifei et al. assign the expert weights directly by a subjective weighting method, determine the attribute weights by continuous weighted entropy, and solve an evaluation problem in community emergency risk management with TOPSIS [12]. In the interval hesitant fuzzy environment, Gitinavard et al. determine the attribute weights by combining expert empowerment with an extended maximum deviation method, extend the IVHF-TOPSIS method to determine the expert weights, and use the proposed IVHF-MCWR model to solve location and supplier selection problems [13].
When fuzzy multi-attribute group decision-making is applied to the problems above, the expert weights or attribute weights are determined by either a subjective or an objective weighting method alone, which cannot take into account the internal laws of the data and the opinions of the experts at the same time. Liu et al. put forward a new optimization model for expert weights [14]: when an expert's evaluation results are consistent with those of the whole expert group, that expert should be given a higher weight. At the same time, decision-makers may impose constraints on the expert weights, which fully exploits the advantages of both the subjective and the objective weighting approaches.
However, Ting-Yu directly entrusts the weight of experts [9], thus this paper extends the optimization model of determining the weight of experts proposed by [14] under the environment of interval-valued intuitionistic fuzzy sets and combines it with [9]. So that the weight of experts and attributes will be determined by the optimization model formed by objective data respectively, and at the same time, it can be constrained by expert opinions. Finally, a complete fuzzy multi-attribute group decision-making process is formed by combining TOPSIS.\nThis paper is arranged as follows. In the second part, the related theories of interval intuitionistic fuzzy sets are expounded. The third part explains the extended expert weight determination method [14] and how to determine the attribute weight, as well as the extended TOPSIS [14], and finally develops a complete fuzzy multi-attribute group decision-making method. The fourth part illustrates the effectiveness of this method through a decision-making case, and the fifth part is the summary of this paper." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21" ], "table_ref": [], "text": "Fuzzy set is a common method for fuzzy processing. In 1965, Zedah put forward the concept of fuzzy set [15], which provided a solution for people to deal with fuzzy information in decision-making problems.In 1986, Atanassov et al. extended the fuzzy set and put forward the intuitionistic fuzzy set [16]. This theory can simultaneously express the support, opposition and neutrality of the decision-maker in terms of membership, non-membership and hesitation, which can effectively deal with the problem of uncertain decision information. In 1996, Gehrke et al. advanced interval fuzzy sets to solve the problem that it is too strict to use a certain numerical value as the membership degree [17], and its membership degree is in the form of a closed subinterval of an interval. In 2009, Torra et al. proposed the concept of hesitant fuzzy set in order to describe the hesitation in the decision-making process [18], which allows the existence of multiple membership values. In 2012, Zhu et al. combined hesitant fuzzy sets with intuitive fuzzy sets, and proposed dual hesitant fuzzy sets [19]. They added non-membership degree to the hesitant fuzzy set, and allowed many values to appear in the non-membership degree. In 2013, Yager proposed Pythagorean fuzzy sets by adjusting the constraints of membership and non-membership in intuitionistic fuzzy sets [20]. Other types of fuzzy sets, such as interval intuitionistic fuzzy sets [21], are all extended on the basis of the above fuzzy sets.\nThe multi-attribute group decision-making algorithm proposed in this paper is discussed in the environment of interval intuitionistic fuzzy sets, so we will list the related theories of interval intuitionistic fuzzy sets.\nDef 1. 𝑋 be a non-empty set and the interval intuitionistic fuzzy set is as follows:\n𝐴 = { < 𝑥, (𝜇 𝐴 (𝑥), 𝑣 𝐴 (𝑥)) > |𝑥 ∈ 𝑋 } . (1\n)\nWhere 𝜇 𝐴 (𝑥) and 𝑣 𝐴 (𝑥) represent the membership interval and non-membership interval of 𝑥 ∈ 𝑋, respectively. They can be expressed by the interval as:\n𝜇 𝐴 (𝑥) = [𝜇 - 𝐴 (𝑥), 𝜇 + 𝐴 (𝑥)], 𝑣 𝐴 (𝑥) = [𝑣 - 𝐴 (𝑥), 𝑣 + 𝐴 (𝑥)],(2)\nthey satisfy:\n𝜇 𝐴 (𝑥) ⊆ [0, 1], 𝑣 𝐴 (𝑥) ⊆ [0, 1] and 0 ≤ 𝜇 𝐴 (𝑥) + 𝑣 𝐴 (𝑥) ≤ 1. When 𝜇 - 𝐴 (𝑥) = 𝜇 + 𝐴 (𝑥) and 𝑣 - 𝐴 (𝑥) = 𝑣 + 𝐴 (𝑥)\n, interval intuitionistic fuzzy sets degenerate into intuitionistic fuzzy sets. 
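To make Def. 1 concrete, the sketch below shows one possible way to represent and validate an interval-valued intuitionistic fuzzy number in Python; it also computes the hesitation interval that is introduced next in Eq. (3). The class name, field names and numerical tolerance are illustrative choices, not part of the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IVIFN:
    """Interval-valued intuitionistic fuzzy number <[mu_lo, mu_hi], [nu_lo, nu_hi]> (Def. 1)."""
    mu_lo: float  # lower bound of the membership interval
    mu_hi: float  # upper bound of the membership interval
    nu_lo: float  # lower bound of the non-membership interval
    nu_hi: float  # upper bound of the non-membership interval

    def __post_init__(self):
        # Both intervals must lie in [0, 1] and the upper bounds must satisfy mu_hi + nu_hi <= 1.
        assert 0.0 <= self.mu_lo <= self.mu_hi <= 1.0
        assert 0.0 <= self.nu_lo <= self.nu_hi <= 1.0
        assert self.mu_hi + self.nu_hi <= 1.0 + 1e-12

    def hesitation(self):
        """Hesitation interval [1 - mu_hi - nu_hi, 1 - mu_lo - nu_lo], cf. Eq. (3)."""
        return (1.0 - self.mu_hi - self.nu_hi, 1.0 - self.mu_lo - self.nu_lo)

    def is_intuitionistic(self):
        """True when both intervals collapse to points, i.e. the IVIFN degenerates."""
        return self.mu_lo == self.mu_hi and self.nu_lo == self.nu_hi

# Example: the linguistic grade 'High' used later in the case study, <[0.50, 0.70], [0.05, 0.25]>.
high = IVIFN(0.50, 0.70, 0.05, 0.25)
print(high.hesitation())         # ≈ (0.05, 0.45)
print(high.is_intuitionistic())  # False
```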
At the same time, for ∀𝑥 ∈ 𝑋 , its hesitation interval can be expressed as:\n𝜋 𝐴 (𝑥) = [𝜋 - 𝐴 (𝑥), 𝜋 + 𝐴 (𝑥)] = [1 -𝜇 + 𝐴 (𝑥) -𝑣 + 𝐴 (𝑥), 1 -𝜇 - 𝐴 (𝑥) -𝑣 - 𝐴 (𝑥)]. (3\n) Def 2. If 𝐴 𝑥 =< 𝜇 𝐴 (𝑥), 𝑣 𝐴 (𝑥) >=< [𝜇 - 𝐴 (𝑥), 𝜇 + 𝐴 (𝑥)], [𝑣 - 𝐴 (𝑥), 𝑣 + 𝐴 (𝑥)] >, 𝐵 𝑥 =< 𝜇 𝐵 (𝑥), 𝑣 𝐵 (𝑥) >=< [𝜇 - 𝐵 (𝑥), 𝜇 + 𝐵 (𝑥)], [𝑣 - 𝐵 (𝑥), 𝑣 + 𝐵 (𝑥)\n] >, are any two interval intuitionistic fuzzy sets, 𝜆 is any real number greater than 0, then there are the following operation rules:\n1.\n𝐴 𝑥 ⊕ 𝐵 𝑥 =< [𝜇 - 𝐴 (𝑥) + 𝜇 - 𝐵 (𝑥) -𝜇 - 𝐴 (𝑥) ⋅ 𝜇 - 𝐵 (𝑥), 𝜇 + 𝐴 (𝑥) + 𝜇 + 𝐵 (𝑥) -𝜇 + 𝐴 (𝑥) ⋅ 𝜇 + 𝐵 (𝑥)], [𝑣 - 𝐴 (𝑥) ⋅ 𝑣 - 𝐵 (𝑥), 𝑣 + 𝐴 (𝑥) ⋅ 𝑣 + 𝐵 (𝑥)] >; 2. 𝐴 𝑥 ⊗ 𝐵 𝑥 =< [𝜇 - 𝐴 (𝑥) ⋅ 𝜇 - 𝐵 (𝑥), 𝜇 + 𝐴 (𝑥) ⋅ 𝜇 + 𝐵 (𝑥)], [𝑣 - 𝐴 (𝑥) + 𝑣 - 𝐵 (𝑥) -𝑣 - 𝐴 (𝑥) ⋅ 𝑣 - 𝐵 (𝑥), 𝑣 + 𝐴 (𝑥) + 𝑣 + 𝐵 (𝑥) -𝑣 + 𝐴 (𝑥) ⋅ 𝑣 + 𝐵 (𝑥)] >; 3. 𝜆 ⋅ 𝐴 𝑥 =< [1 -(1 -𝜇 - 𝐴 (𝑥)) 𝜆 , 1 -(1 -𝜇 + 𝐴 (𝑥)) 𝜆 ], [(𝑣 - 𝐴 (𝑥)) 𝜆 , (𝑣 + 𝐴 (𝑥)) 𝜆 ] >; 4. (𝐴 𝑥 ) 𝜆 =< [(𝜇 - 𝐴 (𝑥)) 𝜆 , (𝜇 + 𝐴 (𝑥)) 𝜆 ], [1 -(1 -𝑣 - 𝐴 (𝑥)) 𝜆 , 1 -(1 -𝑣 + 𝐴 (𝑥)) 𝜆 ] > . Def 3. If 𝐴 𝑥 =< 𝜇 𝐴 (𝑥), 𝑣 𝐴 (𝑥) >=< [𝜇 - 𝐴 (𝑥), 𝜇 + 𝐴 (𝑥)], [𝑣 - 𝐴 (𝑥), 𝑣 + 𝐴 (𝑥)] >, 𝐵 𝑥 =< 𝜇 𝐵 (𝑥), 𝑣 𝐵 (𝑥) >=< [𝜇 - 𝐵 (𝑥), 𝜇 + 𝐵 (𝑥)], [𝑣 - 𝐵 (𝑥), 𝑣 + 𝐵 (𝑥)\n] >, are any two interval intuitionistic fuzzy sets, then the lower bound 𝑝 -(𝐴 𝑥 ⊇ 𝐵 𝑥 ) of the inclusion comparison possibility of 𝐴 𝑥 and 𝐵 𝑥 is defined as [22]:\n𝑝 -(𝐴 𝑥 ⊇ 𝐵 𝑥 ) = 𝑚𝑎𝑥 { 1 -𝑚𝑎𝑥 { (1 -𝑣 - 𝐵 (𝑥)) -𝜇 - 𝐴 (𝑥) (1 -𝜇 - 𝐴 (𝑥) -𝑣 + 𝐴 (𝑥)) + (1 -𝜇 + 𝐵 (𝑥) -𝑣 - 𝐵 (𝑥)) , 0 } , 0 } . (4\n)\nAnd the upper bound 𝑝 + (𝐴 𝑥 ⊇ 𝐵 𝑥 ) of the inclusion comparison possibility of 𝐴 𝑥 and 𝐵 𝑥 is defined as:\n𝑝 + (𝐴 𝑥 ⊇ 𝐵 𝑥 ) = 𝑚𝑎𝑥 { 1 -𝑚𝑎𝑥 { (1 -𝑣 + 𝐵 (𝑥)) -𝜇 + 𝐴 (𝑥) (1 -𝜇 + 𝐴 (𝑥) -𝑣 - 𝐴 (𝑥)) + (1 -𝜇 - 𝐵 (𝑥) -𝑣 + 𝐵 (𝑥)) , 0 } , 0 } . (5\n)\nThen the inclusion comparison possibility 𝑝(𝐴 𝑥 ⊇ 𝐵 𝑥 ) of 𝐴 𝑥 and 𝐵 𝑥 is defined as:\n𝑝(𝐴 𝑥 ⊇ 𝐵 𝑥 ) = 1 2 (𝑝 -(𝐴 𝑥 ⊇ 𝐵 𝑥 ) + 𝑝 + (𝐴 𝑥 ⊇ 𝐵 𝑥 )),(6)\nthat is to say, the possibility that 𝐴 𝑥 is not smaller than 𝐵 𝑥 is 𝑝(𝐴 𝑥 ⊇ 𝐵 𝑥 ). Then 𝑝(𝐴 𝑥 ⊇ 𝐵 𝑥 ) has the following properties:\n1. 0 ≤ 𝑝(𝐴 𝑥 ⊇ 𝐵 𝑥 ) ≤ 1; 2. 𝑝(𝐴 𝑥 ⊇ 𝐵 𝑥 ) + 𝑝(𝐴 𝑥 ⊆ 𝐵 𝑥 ) = 1." }, { "figure_ref": [], "heading": "Improved fuzzy multi-attribute group decision-making method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Fuzzy multi-attribute group decision-making problem", "publication_ref": [ "b8" ], "table_ref": [], "text": "With the increasing complexity of decision-making environment and problems, fuzzy multi-attribute group decision-making methods for solving evaluation and decision-making problems have been widely concerned. Although there are many methods for fuzzy multi-attribute group decision-making, we all have to go through three steps: fuzzification, expert weight and attribute weight determination. When solving the multi-attribute group decisionmaking problem, we might as well make the following assumptions:\n1. 𝑙 decision makers; 2. 𝑚 alternatives; 3. 𝑛 indicators of each alternative.\nExperts need to evaluate each index of each alternative semantically, among which the 𝑘-th decision maker is marked as 𝐷 𝑘 , the 𝑖-th alternative is marked as 𝐴 𝑖 , and the 𝑗-th indicator is marked as 𝑥 𝑗 . The following Figure 1 shows the process of solving the fuzzy multi-attribute group decision-making problem. 
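Since the improved TOPSIS procedure described next repeatedly uses the inclusion comparison possibility of Def. 3, the following sketch implements Eqs. (4)-(6) directly. Numbers are represented as plain (mu_lo, mu_hi, nu_lo, nu_hi) tuples; the guard against a zero denominator is an added assumption of ours, as the paper does not discuss that degenerate case.

```python
def p_lower(a, b):
    """Lower bound p^-(A ⊇ B) of Eq. (4); a and b are (mu_lo, mu_hi, nu_lo, nu_hi) tuples."""
    a_mu_lo, a_mu_hi, a_nu_lo, a_nu_hi = a
    b_mu_lo, b_mu_hi, b_nu_lo, b_nu_hi = b
    denom = (1.0 - a_mu_lo - a_nu_hi) + (1.0 - b_mu_hi - b_nu_lo)
    if denom <= 0.0:  # degenerate case (no hesitation on either side), not covered by the paper
        return 1.0 if a_mu_lo >= 1.0 - b_nu_lo else 0.0
    ratio = ((1.0 - b_nu_lo) - a_mu_lo) / denom
    return max(1.0 - max(ratio, 0.0), 0.0)

def p_upper(a, b):
    """Upper bound p^+(A ⊇ B) of Eq. (5)."""
    a_mu_lo, a_mu_hi, a_nu_lo, a_nu_hi = a
    b_mu_lo, b_mu_hi, b_nu_lo, b_nu_hi = b
    denom = (1.0 - a_mu_hi - a_nu_lo) + (1.0 - b_mu_lo - b_nu_hi)
    if denom <= 0.0:
        return 1.0 if a_mu_hi >= 1.0 - b_nu_hi else 0.0
    ratio = ((1.0 - b_nu_hi) - a_mu_hi) / denom
    return max(1.0 - max(ratio, 0.0), 0.0)

def inclusion_possibility(a, b):
    """p(A ⊇ B) = (p^- + p^+) / 2, Eq. (6): possibility that A is not smaller than B."""
    return 0.5 * (p_lower(a, b) + p_upper(a, b))

# 'High' versus 'Medium' on the linguistic scale used later in the case study.
H = (0.50, 0.70, 0.05, 0.25)
M = (0.30, 0.50, 0.20, 0.40)
print(round(inclusion_possibility(H, M), 4))                              # 0.7273
print(round(inclusion_possibility(H, M) + inclusion_possibility(M, H), 4))  # 1.0 (property 2)
```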
Specifically, based on the improved TOPSIS method proposed in [9], this paper first fuzzifies the semantic evaluation of the 𝑗-th index of the 𝑖-th alternative by the 𝑘-th decision-maker through the interval intuitionistic fuzzy set, which can be recorded as: 𝐴 𝑘 𝑖𝑗 =< [𝜇 𝑘- 𝑖𝑗 , 𝜇 𝑘+ 𝑖𝑗 ], [𝑣 𝑘- 𝑖𝑗 , 𝑣 𝑘+ 𝑖𝑗 ] >. Secondly, optimization models are established to determine the expert weight and attribute weight, and corresponding constraints can be added according to actual needs. In this way, we can not only give full play to the advantages of objective weighting method that make full use of objective data, but also avoid the disadvantages of not considering the subjective intention of decision makers. Finally, the alternatives can be sorted. Thus, a complete fuzzy multi-attribute group decision-making method is formed, which can be used to help people solve complex multi-attribute group decision-making problems." }, { "figure_ref": [ "fig_1" ], "heading": "Determination of expert weight based on optimization model", "publication_ref": [ "b13", "b13", "b1", "b13" ], "table_ref": [], "text": "Liu et al. points out that different experts have different degrees of experience and knowledge of related fields, therefore the importance of different experts should be different, and we should give higher weight to experts with rich experience and full understanding of decision-making projects [14]. In other words, if the evaluation results of an expert are more consistent with the evaluation results of all experts, the evaluation results of the expert will be more valuable for reference, so we give such experts greater weight. Then we will establish an optimization model based on this and combine the subjective intention of decision makers to restrict it, so that we can get the weight of experts through the combination of subjective and objective methods. Firstly, we assume that the decision matrix corresponding to the 𝑖-th alternative is 𝐴 (𝑖) :\n𝐴 (𝑖) = ( 𝐴 1 (𝑖) 𝐴 2 (𝑖) ⋯ 𝐴 𝑙 (𝑖) ) , (7\n)\nwhere\n𝐴 𝑘 (𝑖) = ( 𝐴 𝑘 𝑖1 𝐴 𝑘 𝑖2 ⋯ 𝐴 𝑘 𝑖𝑛\n) 𝑇 represents the evaluation of the 𝑖-th alternative by the 𝑘-th expert, and we assume that the corresponding weight of experts is 𝒘 = ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑙 ) 𝑇 . Secondly, each alternative corresponds to a consistent score point, which is obtained by linear combination of 𝑙 experts' evaluations. The interval intuitionistic fuzzy set corresponding to the evaluation of the 𝑗-th indicator of the 𝑖-th alternative by the 𝑘-th decisionmaker is 𝐴 𝑘 𝑖𝑗 , which corresponds to four numbers. In order to be able to use the weight determination model proposed by [14], we split the interval intuitionistic fuzzy set, thus the length of the evaluation column vector of the 𝑘-th decision-maker for the 𝑖-th alternative becomes four times as long as the original one:\n𝐴 𝑘 (𝑖) = ( 𝜇 𝑘- 𝑖1 ⋯ 𝜇 𝑘- 𝑖𝑛 𝜇 𝑘+ 𝑖1 ⋯ 𝜇 𝑘+ 𝑖𝑛 𝑣 𝑘- 𝑖1 ⋯ 𝑣 𝑘- 𝑖𝑛 𝑣 𝑘+ 𝑖1 ⋯ 𝑣 𝑘+ 𝑖𝑛 ) 𝑇 . (8\n)\nThen the consistent score point corresponding to the 𝑖-th alternative can be expressed as:\n𝒃 (𝑖) = 𝑙 ∑ 𝑘=1 𝑤 𝑘 ⋅ 𝐴 𝑘 (𝑖) = ⎛ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎝ ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝜇 𝑘- 𝑖1 ⋮ ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝜇 𝑘- 𝑖𝑛 ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝜇 𝑘+ 𝑖1 ⋮ ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝜇 𝑘+ 𝑖𝑛 ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝑣 𝑘- 𝑖1 ⋮ ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝑣 𝑘- 𝑖𝑛 ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝑣 𝑘+ 𝑖1 ⋮ ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝑣 𝑘+ 𝑖𝑛 ⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠ , (9\n)\nwhich reflects the overall evaluation results of experts on the 𝑖-th alternative. 
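As a small illustration of Eqs. (8)-(9), the following sketch splits one expert's interval intuitionistic fuzzy evaluations of an alternative into the 4n-dimensional column vector and forms the consistent score point as a weighted combination. The array layout and variable names are illustrative assumptions.

```python
import numpy as np

def split_evaluation(eval_ivifn):
    """Eq. (8): stack the components (mu^-, mu^+, v^-, v^+) of one expert's evaluation
    of one alternative (shape (n, 4)) into a single 4n-dimensional vector."""
    mu_lo, mu_hi, nu_lo, nu_hi = eval_ivifn.T
    return np.concatenate([mu_lo, mu_hi, nu_lo, nu_hi])

def consistent_score_point(per_expert, weights):
    """Eq. (9): linear combination of the l experts' split evaluation vectors.
    per_expert has shape (l, 4n); weights has shape (l,) and sums to one."""
    return weights @ per_expert

# Toy data: l = 2 experts and n = 2 indicators for a single alternative.
A1 = np.array([[0.50, 0.70, 0.05, 0.25],   # expert 1, indicator 1: 'H'
               [0.30, 0.50, 0.20, 0.40]])  # expert 1, indicator 2: 'M'
A2 = np.array([[0.75, 0.95, 0.00, 0.05],   # expert 2, indicator 1: 'VH'
               [0.30, 0.50, 0.20, 0.40]])  # expert 2, indicator 2: 'M'
per_expert = np.stack([split_evaluation(A1), split_evaluation(A2)])
b_i = consistent_score_point(per_expert, np.array([0.6, 0.4]))
print(b_i.round(3))  # consistent score point b^(i) of this alternative, length 4n = 8
```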
And all alternatives are treated equally, so the decision matrix 𝐴 (𝑖) corresponding to each alternative can be assembled into an overall decision matrix 𝐴 = (\n𝐴 (1) 𝐴 (2) ⋯ 𝐴 (𝑚)\n) 𝑇 when determining the expert weight, where the 𝑘-th column 𝐴 (𝑘) of 𝐴 represents the evaluation results of all alternatives by the 𝑘-th expert. Then the overall consistent score point is 𝒃 = ( 𝒃 (1) 𝒃 (2) ⋯ 𝒃 (𝑚)\n) 𝑇 , thus the distance from the evaluation results of all alternatives by the 𝑘-th expert to the overall consistent score point 𝒃 is:\n𝑑 (𝑘) = ‖ ‖ ‖ 𝐴 (𝑘) -𝒃 ‖ ‖ ‖2 . (10\n)\nFinally, we can establish an optimization model as follows:\nmin 𝒘 𝑄(𝒘) = 𝑙 ∑ 𝑘=1 𝑑 (𝑘) 𝑠.𝑡. { ∑ 𝑙 𝑘=1 𝑤 𝑘 = 1, 0 ≤ 𝑤 𝑘 ≤ 1, 1 ≤ 𝑘 ≤ 𝑙. (11\n)\nThrough the optimization model, we can understand that if the distance between the evaluation result of an expert and the overall consistent score is closer, we will give such an expert greater weight. For example, as shown in Figure 2, after the above-mentioned optimization model processing, the obtained expert weights are ranked as 𝑤 4 > 𝑤 2 > 𝑤 1 > 𝑤 5 > 𝑤 3 . At the same time, Liu et al. also proves that the optimization model has a unique solution [14], and we can attach more constraints to the optimization model, such as giving the highest weight to authoritative experts, and the optimization model still has a unique solution. And numerical experiments show that the closer the expert's evaluation result is to the consistent score point, the higher the weight he will get." }, { "figure_ref": [], "heading": "Improved TOPSIS method", "publication_ref": [ "b8", "b13", "b8", "b8", "b22", "b23" ], "table_ref": [], "text": "An improved TOPSIS method is proposed to solve the problem of multi-attribute group decision-making by [9]. In this paper, the weight of experts is given directly, but its acquisition method is not explained. In the last section, we can calculate the expert weight by extending and using the method proposed by [14]. Therefore, after getting the expert weight, we can solve the multi-attribute group decision-making problem completely by combining with the improved TOPSIS proposed by [9]. The specific process is as follows. \n)12\nSecondly, according to [9], the optimal membership degree 𝑝(𝐴\n⋅ 𝑘 𝑖𝑗 ) of 𝐴 ⋅ 𝑘 𝑖𝑗\ncan be calculated according to the following formula:\n𝑝(𝐴 ⋅ 𝑘 𝑖𝑗 ) = 1 𝑙(𝑙 -1) ( 𝑙 ∑ 𝑘 ′ =1 𝑝(𝐴 ⋅ 𝑘 𝑖𝑗 ⊇ 𝐴 ⋅ 𝑘 ′ 𝑖𝑗 ) + 𝑙 2 -1).(13)\nAccording to the above calculation, the interval intuitionistic fuzzy order weighted average (IIOWA) operator [23,24] can be extended to get the comprehensive decision matrix 𝐷. The specific steps are as follows:\n1 \n𝜏 𝑘 = 𝑒 -((𝑘-𝑢 𝑙 ) 2 ∕2⋅𝑡 2 𝑙 ) ∑ 𝑙 𝑘 ′ =1 𝑒 -((𝑘 ′ -𝑢 𝑙 ) 2 ∕2⋅𝑡 2 𝑙 ) , (14\n)\nwhere 𝑢 𝑙 is the average value of 1, 2, ⋯ , 𝑙 and 𝑡 𝑙 is the corresponding standard deviation. 3. Using IIOWA operator to calculate the element 𝐴 𝑖𝑗 in row 𝑖 and column 𝑗 of the comprehensive decision matrix 𝐷:\n𝐴 𝑖𝑗 =< [1 - 1 ∏ 𝑘=1 (1 -𝜇 ⋅ 𝜎(𝑘)- 𝑖𝑗 ) 𝜏 𝑘 , 1 - 1 ∏ 𝑘=1 𝜇 ⋅ 𝜎(𝑘)+ 𝑖𝑗 ) 𝜏 𝑘 ], [ 1 ∏ 𝑘=1 (𝑣 ⋅ 𝜎(𝑘)- 𝑖𝑗 ) 𝜏 𝑘 , 1 ∏ 𝑘=1 (𝑣 ⋅ 𝜎(𝑘)+ 𝑖𝑗 ) 𝜏 𝑘 ] >,\nwhich represents the comprehensive evaluation of the 𝑗-th indicator of the 𝑖-th alternative by all experts, and is abbreviated as 𝐴 𝑖𝑗 =< [𝜇 - 𝑖𝑗 , 𝜇 + 𝑖𝑗 ], [𝑣 - 𝑖𝑗 , 𝑣 + 𝑖𝑗 ] >. The comprehensive decision matrix 𝐷 obtained by the above two steps can not only reflect the importance of different experts, but also reflect the consistency of all experts' evaluation. 
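The expert-weight model (11) introduced above is a small convex problem over the probability simplex and can be solved with any constrained optimizer. The sketch below uses SciPy's SLSQP as one possible choice; additional subjective constraints on the expert weights (for instance, forcing a particular expert to receive the largest weight, as discussed in [14]) could be appended to the constraint list. The data layout and helper names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def expert_weights(A):
    """Solve model (11): minimize sum_k || A^(k) - b(w) ||_2 over the simplex, where
    b(w) = sum_k w_k A^(k) is the overall consistent score point (Eqs. (9)-(10)).
    A has shape (l, p): row k stacks expert k's split evaluations of all alternatives."""
    l = A.shape[0]

    def objective(w):
        b = w @ A                                   # overall consistent score point
        return np.linalg.norm(A - b, axis=1).sum()  # Q(w) = sum of distances d^(k)

    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    bounds = [(0.0, 1.0)] * l
    w0 = np.full(l, 1.0 / l)                        # start from equal weights
    result = minimize(objective, w0, method="SLSQP", bounds=bounds, constraints=constraints)
    return result.x

# Synthetic check: expert 2 deviates most from the group, so it should receive a smaller weight.
rng = np.random.default_rng(0)
consensus = rng.random(40)
A = np.stack([consensus + 0.02 * rng.standard_normal(40),
              consensus + 0.30 * rng.standard_normal(40),
              consensus + 0.05 * rng.standard_normal(40)])
print(expert_weights(A).round(3))
```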
Then, according to the comprehensive decision matrix 𝐷, the positive and negative ideal solutions of interval intuitionistic fuzzy are found:\n𝐴 + = { < 𝑥 𝑗 , ([𝜇 - +𝑗 , 𝜇 + +𝑗 ], [𝑣 - +𝑗 , 𝑣 + +𝑗 ]) > |𝑥 𝑗 ∈ 𝑋, 𝑗 = 1, 2, ⋯ , 𝑛 } , (15\n)\n𝐴 -= { < 𝑥 𝑗 , ([𝜇 - -𝑗 , 𝜇 + -𝑗 ], [𝑣 - -𝑗 , 𝑣 + -𝑗 ]) > |𝑥 𝑗 ∈ 𝑋, 𝑗 = 1, 2, ⋯ , 𝑛 } , (16\n)\nwhere \n[\n𝐶𝐶(𝐴 𝑖 ) = 𝑛 ∑ 𝑗=1 𝑝((𝐴 𝑖𝑗 ⊇ 𝐴 -𝑗 |𝑥 𝑗 ∈ 𝑋 𝑏 ), (𝐴 -𝑗 ⊇ 𝐴 𝑖𝑗 |𝑥 𝑗 ∈ 𝑋 𝑐 ))𝑤 𝑗 ⋅ { 𝑛 ∑ 𝑗=1 [(𝑝(𝐴 +𝑗 ⊇ 𝐴 𝑖𝑗 ) + 𝑝(𝐴 𝑖𝑗 ⊇ 𝐴 -𝑗 )|𝑥 𝑗 ∈ 𝑋 𝑏 ), (𝑝(𝐴 𝑖𝑗 ⊇ 𝐴 +𝑗 ) + 𝑝(𝐴 -𝑗 ⊇ 𝐴 𝑖𝑗 )|𝑥 𝑗 ∈ 𝑋 𝑐 )]𝑤 𝑗 } -1 , (17\n)\nwhere 0 ≤ 𝐶𝐶(𝐴 𝑖 ) ≤ 1(𝑖 = 1, 2, ⋯ , 𝑚). For the 𝑗-th indicator of the 𝑖-th alternative, if it belongs to the benefit indicator, the inclusion comparison possibility 𝑝(𝐴 𝑖𝑗 ⊇ 𝐴 -𝑗 ) with 𝐴 𝑖𝑗 not less than 𝐴 -𝑗 and the inclusion comparison possibility 𝑝(𝐴 +𝑗 ⊇ 𝐴 𝑖𝑗 ) with 𝐴 𝑖𝑗 not greater than 𝐴 +𝑗 are calculated. At this time, if there is a higher possibility that 𝐴 𝑖𝑗 is better than 𝐴 +𝑗 and a lower possibility that 𝐴 𝑖𝑗 is worse than 𝐴 -𝑗 , then the 𝑗-th indicator of the 𝑖-th alternative has a good performance. And the same is true for the cost indicator. Therefore, we can sort the closeness 𝐶𝐶(𝐴 𝑖 ) and choose the alternative with the largest index value as the optimal alternative." }, { "figure_ref": [], "heading": "Determination method of attribute weight", "publication_ref": [ "b13", "b8", "b8", "b8", "b24" ], "table_ref": [], "text": "In the first two parts, the problem of multi-attribute group decision-making can be solved by combining the method of determining expert weights proposed by [14] with the improved TOPSIS method in [9], but how to calculate attribute weights is not explained. When the attribute weight is unknown, Ting-Yu also suggests that an optimization model can be established by combining subjective and objective weighting methods to determine the attribute weight [9]. The specific methods are as follows.\nIn the previous part, we finally choose the alternative through the improved closeness 𝐶𝐶(𝐴 𝑖 ). When the attribute weights are unknown, Ting-Yu established the following optimization model [9]:\nmax { 𝐶𝐶(𝐴 1 ), 𝐶𝐶(𝐴 2 ), ⋯ , 𝐶𝐶(𝐴 𝑚 ) } 𝑠.𝑡. { ∑ 𝑛 𝑗=1 𝑤 𝑗 = 1, 𝑤 𝑗 ≥ 0, 𝑗 = 1, 2, ⋯ , 𝑛. (18\n)\nAt this time, the above multi-objective optimization model is transformed into the following single-objective optimization model by using the max-min operator proposed in [25]:\nmax 𝜗 𝑠.𝑡. { 𝐶𝐶(𝐴 𝑖 ) ≥ 𝜗, 𝑖 = 1, 2, ⋯ , 𝑚, ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ 0 . (19\n)\nWhere\nΓ 0 = { ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) | 𝑛 ∑ 𝑗=1 𝑤 𝑗 = 1, 𝑤 𝑗 ≥ 0, 𝑗 = 1, 2, ⋯ , 𝑛 } , (20\n)\nand attribute weights can be obtained by solving the above optimization model.\nIn practical application, experts can limit the attribute weight according to their own experience and professional knowledge, which can be divided into five forms: weak ranking, strict ranking, ranking difference, interval boundary and proportional boundary. However, the opinions of experts are almost impossible to be completely unified, so the following non-negative deviation variables:\n𝑒 - (1)𝑗 1 𝑗 2 , 𝑒 - (2)𝑗 1 𝑗 2 , 𝑒 - (3)𝑗 1 𝑗 2 𝑗 3 , 𝑒 - (4)𝑗 1 , 𝑒 + (4)𝑗 1 , 𝑒 - (5)𝑗 1 𝑗 2 (𝑗 1 ≠ 𝑗 2 ≠ 𝑗 3 ),(21)\nwhich can be added to the five types of constraints to become the relaxed five types of constraints.\n1. Relaxed weak ranking:\nΓ 1 = { ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ 0 |𝑤 𝑗 1 + 𝑒 - (1)𝑗 1 𝑗 2 ≥ 𝑤 𝑗 2 , 𝑗 1 ∈ Υ 1 , 𝑗 2 ∈ Λ 1 } ,\nwhere Υ 1 and Λ 1 are two disjoint subsets in index set 𝑁 = {1, 2, ⋯ , 𝑛}. 2. 
Relaxed strict ranking:\nΓ 2 = { ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ 0 |𝑤 𝑗 1 -𝑤 𝑗 2 + 𝑒 - (2)𝑗 1 𝑗 2 ≥ 𝛿 ′ 𝑗 1 𝑗 2 , 𝑗 1 ∈ Υ 2 , 𝑗 2 ∈ Λ 2 } ,\nwhere\n𝛿 ′ 𝑗 1 𝑗 2 is a constant and 𝛿 ′ 𝑗 1 𝑗 2\n≥ 0, Υ 2 and Λ 2 are two disjoint subsets in index set 𝑁. 3. Relaxed ranking of differences:\nΓ 3 = { ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ 0 |𝑤 𝑗 1 -2𝑤 𝑗 2 + 𝑤 𝑗 3 + 𝑒 - (3)𝑗 1 𝑗 2 𝑗 3 ≥ 0, 𝑗 1 ∈ Υ 3 , 𝑗 2 ∈ Λ 3 , 𝑗 3 ∈ Ω 3 } ,\nwhere Υ 3 , Λ 3 and Ω 3 are three disjoint subsets in index set 𝑁. 4. Relaxed interval boundary:\nΓ 4 = { ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ 0 |𝑤 𝑗 1 + 𝑒 - (4)𝑗 1 ≥ 𝛿 𝑗 1 , 𝑤 𝑗 1 -𝑒 + (4)𝑗 1 ≤ 𝛿 𝑗 1 + 𝜀 𝑗 1 , 𝑗 1 ∈ Υ 4 } ,\nwhere 𝛿 𝑗 1 and 𝜀 𝑗 1 are constans, which satisfy\n𝛿 𝑗 1 ≥ 0, 𝜀 𝑗 1 ≥ 0, 0 ≤ 𝛿 𝑗 1 ≤ 𝛿 𝑗 1 + 𝜀 𝑗 1 ≤ 1, Υ 4 is a subset in index set 𝑁." }, { "figure_ref": [], "heading": "Relaxed proportional boundary:", "publication_ref": [ "b24" ], "table_ref": [], "text": "Γ 5 = { ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ 0 | 𝑤 𝑗 1 𝑤 𝑗 2 + 𝑒 - (5)𝑗 1 𝑗 2 ≥ 𝛿 ′′ 𝑗 1 𝑗 2 , 𝑗 1 ∈ Υ 5 , 𝑗 2 ∈ Λ 5 } , where 𝛿 ′′ 𝑗 1 𝑗 2 is a constant and 0 ≤ 𝛿 ′ 𝑗 1 𝑗 2\n≤ 1, Υ 5 and Λ 5 are two disjoint subsets in index set 𝑁.\nAnd let Γ be the sum of the five kinds of relaxed constraints:\nΓ = Γ 1 ∪ Γ 2 ∪ Γ 3 ∪ Γ 4 ∪ Γ 5 .\nObviously, in the process of restricting attribute weights by experts, the less opinions experts have, the more favorable it is to determine the final attribute weights. In other words, we hope that these non-negative deviation variables are small enough. Combined with the above optimization model, a new optimization model can be established:\nmax { 𝐶𝐶(𝐴 1 ), 𝐶𝐶(𝐴 2 ), ⋯ , 𝐶𝐶(𝐴 𝑚 ) } min { ∑ 𝑗 1 ,𝑗 2 ,𝑗 3 ∈𝑁 (𝑒 - (1)𝑗 1 𝑗 2 + 𝑒 - (2)𝑗 1 𝑗 2 + 𝑒 - (3)𝑗 1 𝑗 2 𝑗 3 + 𝑒 - (4)𝑗 1 + 𝑒 + (4)𝑗 1 + 𝑒 - (5)𝑗 1 𝑗 2 ) } 𝑠.𝑡. ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎩ ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ. 𝑒 - (1)𝑗 1 𝑗 2 ≥ 0, 𝑗 1 ∈ Υ 1 , 𝑗 2 ∈ Λ 1 . 𝑒 - (2)𝑗 1 𝑗 2 ≥ 0, 𝑗 1 ∈ Υ 2 , 𝑗 2 ∈ Λ 2 . 𝑒 - (3)𝑗 1 𝑗 2 𝑗 3 ≥ 0, 𝑗 1 ∈ Υ 3 , 𝑗 2 ∈ Λ 3 , 𝑗 3 ∈ Ω 3 . 𝑒 - (4)𝑗 1 ≥ 0, 𝑒 + (4)𝑗 1 ≥ 0, 𝑗 1 ∈ Υ 4 . 𝑒 - (5)𝑗 1 𝑗 2 ≥ 0, 𝑗 1 ∈ Υ 5 , 𝑗 2 ∈ Λ 5 .(22)\nIn order to facilitate the solution, the above model can be transformed into a single objective optimization model [25]:\nmax 𝜗 𝑠.𝑡. ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ 𝐶𝐶(𝐴 𝑖 ) ≥ 𝜗, 𝑖 = 1, 2, ⋯ , 𝑚, - ∑ 𝑗 1 ,𝑗 2 ,𝑗 3 ∈𝑁 (𝑒 - (1)𝑗 1 𝑗 2 + 𝑒 - (2)𝑗 1 𝑗 2 + 𝑒 - (3)𝑗 1 𝑗 2 𝑗 3 + 𝑒 - (4)𝑗 1 + 𝑒 + (4)𝑗 1 + 𝑒 - (5)𝑗 1 𝑗 2 ) ≥ 𝜗, ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ. 𝑒 - (1)𝑗 1 𝑗 2 ≥ 0, 𝑗 1 ∈ Υ 1 , 𝑗 2 ∈ Λ 1 . 𝑒 - (2)𝑗 1 𝑗 2 ≥ 0, 𝑗 1 ∈ Υ 2 , 𝑗 2 ∈ Λ 2 . 𝑒 - (3)𝑗 1 𝑗 2 𝑗 3 ≥ 0, 𝑗 1 ∈ Υ 3 , 𝑗 2 ∈ Λ 3 , 𝑗 3 ∈ Ω 3 . 𝑒 - (4)𝑗 1 ≥ 0, 𝑒 + (4)𝑗 1 ≥ 0, 𝑗 1 ∈ Υ 4 . 𝑒 - (5)𝑗 1 𝑗 2 ≥ 0, 𝑗 1 ∈ Υ 5 , 𝑗 2 ∈ Λ 5 . (23\n)\nBy combining subjective and objective methods to establish this optimization model, we not only consider the objective information contained in each index, but also consider the subjective opinions of experts. Therefore, the attribute weights with practical significance can be obtained to solve the multi-attribute group decision-making problem. At the same time, we can also give some other constraints to limit the attribute weight, such as the requirements of the decision-making project itself." }, { "figure_ref": [], "heading": "The complete algorithm", "publication_ref": [ "b10" ], "table_ref": [], "text": "In this paper, interval intuitionistic fuzzy sets, optimization model for determining expert weights, improved TOP-SIS decision-making method and optimization model for determining attribute weights are introduced respectively. 
This paper combines them for the first time, which makes it possible to solve the multi-attribute group decision-making problem completely and to reach a final decision. The framework of this paper is shown in Figure 3, and the complete procedure is summarized in Algorithm 1: the linguistic evaluations are converted into interval intuitionistic fuzzy sets; the distance from each expert's evaluation 𝐴 (𝑘) of all alternatives (𝑘 = 1, 2, ⋯ , 𝑙) to the overall consistent score point 𝒃 is computed; the optimization model (11) is established to obtain the expert weights; the weighted evaluations are aggregated with the extended IIOWA operator into the comprehensive decision matrix; the attribute weights are obtained from model (23); and the alternatives are finally ranked by the closeness index 𝐶𝐶(𝐴 𝑖 )." }, { "figure_ref": [], "heading": "Case study", "publication_ref": [ "b8", "b8", "b10", "b16", "b8", "b13", "b8" ], "table_ref": [ "tab_4", "tab_5", "tab_4", "tab_6" ], "text": "This section solves a multi-attribute group decision-making problem with the method proposed above. We use the same case as [9] and compare the final results. The case is a decision-making problem about the treatment of basilar artery occlusion in an 82-year-old solitary patient with hypertension. Her two sons and one daughter, the decision makers { 𝐷 1 , 𝐷 2 , 𝐷 3 }, give linguistic evaluations of four alternatives { 𝐴 1 , 𝐴 2 , 𝐴 3 , 𝐴 4 } (intravenous thrombolysis, intra-arterial thrombolysis, antiplatelet therapy and heparinization) with respect to five indicators { 𝑥 1 , 𝑥 2 , 𝑥 3 , 𝑥 4 , 𝑥 5 }: survival rate, severity of complications, possibility of cure, cost and possibility of recurrence, where { 𝑥 1 , 𝑥 3 } are benefit indicators and { 𝑥 2 , 𝑥 4 , 𝑥 5 } are cost indicators; the final decision is obtained through the above algorithm. Table 1 shows the interval intuitionistic fuzzy sets corresponding to the different linguistic evaluations. The algorithm proposed in this paper is applied as follows:
Step 1. Collect the linguistic evaluations of the four alternatives by the three decision makers on the five indicators, as shown in Table 2, and convert them into interval intuitionistic fuzzy sets according to Table 1.
Step 2. By splitting the interval intuitionistic fuzzy sets, the evaluation column vector of the 𝑘-th decision-maker for the 𝑖-th alternative becomes four times as long as the original one. Calculate the consistent score point 𝒃 (𝑖) of the 𝑖-th alternative as the linear combination of the 𝑙 experts' evaluations, then assemble the evaluation results of all alternatives and calculate the distance from the evaluation results 𝐴 (𝑘) of the 𝑘-th expert to the overall consistent score point 𝒃.
Table 2 (linguistic evaluations, listed as 𝐷 1 / 𝐷 2 / 𝐷 3 ): 𝐴 1 : 𝑥 1 VH, VH, H; 𝑥 2 M, M, L; 𝑥 3 M, M, H; 𝑥 4 M, M, L; 𝑥 5 M, L, M. 𝐴 2 : 𝑥 1 H, H, VH; 𝑥 2 M, H, M; 𝑥 3 VH, H, VH; 𝑥 4 VH, VH, H; 𝑥 5 L, L, VL. 𝐴 3 : 𝑥 1 M, L, M; 𝑥 2 L, L, M; 𝑥 3 VL, VL, L; 𝑥 4 L, VL, VL; 𝑥 5 H, VH, VH. 𝐴 4 : 𝑥 1 M, H, H; 𝑥 2 VL, M, L; 𝑥 3 L, M, L; 𝑥 4 M, H, L; 𝑥 5 VH, H, VH.
Step 3. Establish the expert weight optimization model: min 𝒘 𝑄(𝒘) = ∑ 3 𝑘=1 𝑑 (𝑘) subject to ∑ 3 𝑘=1 𝑤 𝑘 = 1 and 0 ≤ 𝑤 𝑘 ≤ 1 for 1 ≤ 𝑘 ≤ 3. Solving it gives the weight vector of the three decision makers as 𝒘 = ( 0.45456 0.26647 0.27897 ) 𝑇 , which differs from the vector ( 0.40 0.35 0.25 ) 𝑇 given directly in [9]. Table 3 shows the weights of the three decision makers and their distances to the overall consistent score point. It can be observed that the greater the weight of a decision maker, the closer his evaluation is to the overall consistent score point, which is in line with our goal in establishing the optimization model (11).
Step 4. With the expert weights obtained above, weight the evaluations 𝐴 𝑘 𝑖𝑗 by (12) to get 𝐴 ⋅ 𝑘 𝑖𝑗 and calculate the optimal memberships 𝑝(𝐴 ⋅ 𝑘 𝑖𝑗 ) by (13).
Step 5. Obtain the comprehensive decision matrix 𝐷 with the extended IIOWA operator, whose weighting vector is 𝜏 = ( 0.2429 0.5142 0.2429 ) 𝑇 .
Step 6. Use (15) and (16) to find the interval intuitionistic fuzzy positive and negative ideal solutions of the comprehensive decision matrix 𝐷, and calculate the closeness of each alternative through (17).
Step 7. Establish the attribute weight optimization model, i.e., instantiate model (23) with the comprehensive decision matrix 𝐷 and the relaxed constraints provided by the experts: maximize 𝜗 subject to 𝐶𝐶(𝐴 𝑖 ) ≥ 𝜗 for 𝑖 = 1, 2, 3, 4, the relaxed weight constraints with their non-negative deviation variables, ∑ 5 𝑗=1 𝑤 𝑗 = 1 and 𝑤 𝑗 ≥ 0. Solving this model gives the weights of the five attributes as 𝒘 = ( 0.2234 0.1659 0.2245 0.1074 0.2787 ) 𝑇 .
Step 8. Substitute the attribute weights obtained in step 7 into the closeness index to obtain: 𝐶𝐶(𝐴 1 ) = 0.5575228, 𝐶𝐶(𝐴 2 ) = 0.5686608, 𝐶𝐶(𝐴 3 ) = 0.4058395, 𝐶𝐶(𝐴 4 ) = 0.4583039. According to the closeness, the ranking of the alternatives is 𝐴 2 > 𝐴 1 > 𝐴 4 > 𝐴 3 , thus the optimal alternative is 𝐴 2 .
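The attribute-weight step above can also be reproduced numerically. Each closeness index 𝐶𝐶(𝐴 𝑖 ) in (17) is a ratio of two linear functions of 𝒘, so for a fixed 𝜗 the constraints of model (19) become linear; one way to solve the max-min problem is therefore bisection on 𝜗 with a linear-programming feasibility check, as sketched below. The relaxed expert constraints of model (23) could be appended as extra LP rows. The coefficient matrices num and den are placeholders for the numerator and denominator terms of Eq. (17), and all names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def attribute_weights(num, den, tol=1e-6):
    """Solve model (19): max theta s.t. CC(A_i) = (num[i] @ w) / (den[i] @ w) >= theta,
    sum(w) = 1, w >= 0, assuming den[i] @ w > 0.  For fixed theta the constraints
    (num[i] - theta * den[i]) @ w >= 0 are linear, so we bisect on theta."""
    m, n = num.shape
    lo, hi = 0.0, 1.0                      # CC(A_i) always lies in [0, 1]
    best = np.full(n, 1.0 / n)
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        # linprog expects A_ub @ w <= b_ub, so negate the ">= 0" constraints.
        res = linprog(c=np.zeros(n),
                      A_ub=-(num - theta * den), b_ub=np.zeros(m),
                      A_eq=np.ones((1, n)), b_eq=[1.0], bounds=[(0.0, 1.0)] * n)
        if res.success:
            lo, best = theta, res.x        # feasible: try a larger theta
        else:
            hi = theta                     # infeasible: shrink theta
    return best, lo

# Toy coefficients for 4 alternatives and 5 attributes, standing in for the terms of Eq. (17).
rng = np.random.default_rng(1)
num = rng.uniform(0.4, 1.0, size=(4, 5))
den = num + rng.uniform(0.4, 1.0, size=(4, 5))   # denominators dominate, so CC(A_i) < 1
w, theta = attribute_weights(num, den)
print(w.round(3), round(theta, 3))
```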
Compared with [9], the improvement of this paper lies in its combination with [14]: the expert weights obtained by establishing an optimization model that combines subjective and objective weighting are more convincing than expert weights assigned directly by a subjective weighting method. For the same case, the ranking by closeness obtained in [9] is 𝐴 2 > 𝐴 1 > 𝐴 4 > 𝐴 3 . Although the closeness values differ slightly from those calculated in this paper, the final ranking of the alternatives is the same, and the optimal alternative is 𝐴 2 ." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b13", "b8", "b8", "b8", "b13" ], "table_ref": [], "text": "In the context of interval-valued intuitionistic fuzzy sets, this paper extends the optimization model for determining expert weights proposed in [14] and combines it with the method proposed in [9]. In this way, the determination of both expert weights and attribute weights gives full play to the advantages of subjective and objective weighting. Combined with TOPSIS, this yields a complete fuzzy multi-attribute group decision-making method. The feasibility of the proposed method is verified on the treatment-scheme decision-making problem of [9], and comparing our results with those of [9] shows that the final decisions are consistent, which confirms the effectiveness of the proposed method. Our next research direction is to further improve the optimization model for determining expert weights proposed in [14], so that it can be used in other multi-attribute group decision-making methods to improve their decision-making performance." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work is supported by the \"Key Research Program\" (Grant No. 2022YFC3801300), the Ministry of Science and Technology, PRC. We would like to thank Professor Han Huilei and Professor Huang Li for their assistance." } ]
method multi-attribute group decision-making optimization models
A new fuzzy multi-attribute group decision-making method based on TOPSIS and optimization models
[ { "figure_caption": "Figure 1 :1Figure 1: Fuzzy multi-attribute group decision-making problem", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :=2Figure 2: Relationship between weight and distance", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Algorithm 131Figure 3: Frame diagram", "figure_data": "", "figure_id": "fig_2", "figure_label": "31", "figure_type": "figure" }, { "figure_caption": "𝑋 𝑏 represents the benefit indicator, and 𝑋 𝑐 represents the cost indicator in the indicator. Finally, assuming that the attribute weight is 𝒘 = ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) 𝑇 , we can improve the closeness index of the 𝑖-th alternative to be 𝐶𝐶(𝐴 𝑖 ):", "figure_data": "𝜇 -+𝑗 , 𝜇 + +𝑗 ] = [((max𝑖𝜇 + 𝑖𝑗 |𝑥 𝑗 ∈ 𝑋 𝑐 ))],[𝑣 -+𝑗 , 𝑣 + +𝑗 ] = [((min𝑖𝑣 + 𝑖𝑗 |𝑥 𝑗 ∈ 𝑋 𝑐 ))],[𝜇 --𝑗 , 𝜇 + -𝑗 ] = [((min𝑖𝜇 + 𝑖𝑗 |𝑥 𝑗 ∈ 𝑋 𝑐 ))],[𝑣 --𝑗 , 𝑣 + -𝑗 ] = [((max 𝑖𝑣 -𝑖𝑗 |𝑥 𝑗 ∈ 𝑋 𝑏 ), (min 𝑖𝑣 -𝑖𝑗 |𝑥 𝑗 ∈ 𝑋 𝑐 )), ((max 𝑖𝑣 + 𝑖𝑗 |𝑥 𝑗 ∈ 𝑋 𝑏 ), (min 𝑖𝑣 + 𝑖𝑗 |𝑥 𝑗 ∈ 𝑋 𝑐 ))].Also,", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The comprehensive decision matrix 𝐷 is obtained by using the extension IIOWA operator; 6: Find the positive and negative ideal solutions, and assume that the attribute weight is 𝒘 = ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) 𝑇 to get the closeness index 𝐶𝐶(𝐴 𝑖 ) of the 𝑖-th alternative;7: Establish the optimization model(23) and get the attribute weight 𝒘 = ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) 𝑇 ; 8: According to the closeness index 𝐶𝐶(𝐴 𝑖 ), 𝑚 alternatives can be ranked.", "figure_data": "and get the expert weight 𝒘 =(𝑤 1 𝑤 2 ⋯ 𝑤 𝑙) 𝑇 ;4: Weighting the evaluation 𝐴 𝑘 𝑖𝑗 to get 𝐴 ⋅ 𝑘 𝑖𝑗, and calculating the optimal membership 𝑝(𝐴 ⋅ 𝑘 𝑖𝑗) of 𝐴 𝑖𝑗 ⋅ 𝑘;5:", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "2 , 𝐷 3 } will make the linguistic evaluation of four alternatives { 𝐴 1 , 𝐴 2 , 𝐴 3 , 𝐴 4 } : intravenous thrombolysis, intra-arterial thrombolysis, antiplatelet therapy and heparinization from five indicators { 𝑥 1 , 𝑥 2 , 𝑥 3 , 𝑥 4 , 𝑥 5", "figure_data": "}: survivalrate, severity of complications, possibility of cure, cost and possibility of recurrence, where{ 𝑥 1 , 𝑥 3}are the benefitindicators and{ 𝑥 2 , 𝑥 4 , 𝑥 5", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Linguistic evaluation and corresponding interval intuitionistic fuzzy sets", "figure_data": "Linguistic evaluationInterval intuitionistic fuzzy setsVery high(VH)< [0.75, 0.95], [0.00, 0.05] >High(H)< [0.50, 0.70], [0.05, 0.25] >Medium(M)< [0.30, 0.50], [0.20, 0.40] >Low(L)< [0.05, 0.25], [0.50, 0.70] >Very low(VL)< [0.00, 0.05], [0.75, 0.95] >", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Linguistic evaluationAlternativesIndicatorsDecision makers𝐷 1𝐷 2", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Expert weight and distance", "figure_data": "Decision maker𝐷 1𝐷 2𝐷 3Weight0.454560.266470.27897Distance to overall consistent score point 𝒃0.713261.216821.16203Step 4. Through the expert weights obtained above, the evaluation 𝐴 𝑘 𝑖𝑗 is weighted by (12) to get 𝐴 𝑖𝑗 ⋅ 𝑘, and theoptimal membership 𝑝(𝐴 ⋅ 𝑘 𝑖𝑗) of 𝐴 ⋅ 𝑘 𝑖𝑗is calculated by (13). For example, 𝐴 11 ⋅ 1=< [0.8490, 0.9832], [0, 0.0168] >,𝑝(𝐴 ⋅ 1 11⊇ 𝐴 ⋅ 2 11) = 0.6650, 𝑝(𝐴 ⋅ 1 11⊇ 𝐴 ⋅ 3 11) = 0.9168 and 𝑝(𝐴 11 ⋅ 1) = 0.4303.Step 5. 
A comprehensive decision matrix 𝐷 (whose weighted vector is 𝜏 =(0.2429 0.5142 0.2429) 𝑇 ) isobtained by using the extension IIOWA:𝐷 =⎛ ⎜< [0.6896, 0.9153], [0, 0.0816] > ⋮⋯ < [0.2454, 0.4422], [0.2565, 0.4644] > ⋱ ⋮⎞ ⎟.⎜ ⎝ < [0.4088, 0.6189], [0.0983, 0.3668] > ⋯< [0.6959, 0.9192], [0, 0.0780] >⎟ ⎠", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "9492𝑤 1 +0.6887𝑤 2 +𝑤 3 +0.9553𝑤 4 +0.9549𝑤 5 1.4492𝑤 1 +1.4718𝑤 2 +1.8308𝑤 3 +1.8609𝑤 4 +1.7735𝑤 5 𝑒 - (𝑖)14 ≥ 𝑤 4 , 𝑤 5 -𝑤 2 + 𝑒 - (𝑖𝑖)52 ≥ 0.04, 𝑤 3 -2𝑤 2 + 𝑤 4 + 𝑒 - (𝑖𝑖𝑖)324 ≥ 0, 𝑤 4 + 𝑒 - (𝑖𝑣)4 ≥ 0.08, 𝑤 4 -𝑒 + (𝑖𝑣)4 ≤ 0.15, 𝑤 1 + 𝑤 2 + 𝑤 3 + 𝑤 4 + 𝑤 5 = 1, 𝑤 𝑗 ≥ 0, 𝑗 = 1, 2, ⋯ , 5.", "figure_data": "≥ 𝜗,0.8645𝑤 1 +0.5𝑤 2 +𝑤 3 +0.5𝑤 4 +𝑤 5 1.4894𝑤 1 +1.4165𝑤 2 +1.5𝑤 3 +1.5𝑤 4 +1.5𝑤 5≥ 𝜗,0.5𝑤 1 +0.7906𝑤 2 +0.5𝑤 3 +𝑤 4 +0.5429𝑤 5 1.4492𝑤 1 +1.4450𝑤 2 +1.5𝑤 3 +1.5𝑤 4 +1.5429𝑤 5≥ 𝜗,0.6817𝑤 1 +0.8817𝑤 2 +0.7866𝑤 3 +0.8495𝑤 4 +0.5𝑤 5 1.4745𝑤 1 +1.4442𝑤 2 +1.7866𝑤 3 +1.8495𝑤 4 +1.5𝑤 5≥ 𝜗,(𝑒 -(𝑖)14 + 𝑒 -(𝑖𝑖)52 + 𝑒 -(𝑖𝑖𝑖)324 + 𝑒 -(𝑖𝑣)4 + 𝑒 + (𝑖𝑣)4 + 𝑒 -(𝑣)23 ) ≥ 𝜗,𝑤 1 + 𝑤 2 𝑤 3 (𝑒 -(𝑖)14 ≥ 0, 𝑒 -(𝑖𝑖)52 ≥ 0, 𝑒 -(𝑖𝑖𝑖)324 ≥ 0, 𝑒 -(𝑖𝑣)4 ≥ 0, 𝑒 + + 𝑒 -(𝑣)23 ≥ 0.4, (𝑖𝑣)4 ≥ 0, 𝑒 -(𝑣)23 ) ≥ 0,Then the weight of five attributes is 𝒘 =(0.2234 0.1659 0.2245 0.1074 0.2787) 𝑇 .", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" } ]
Qixiao Hu; Shiquan Zhang; Chaolang Hu; Yuetong Liu
[ { "authors": "Ching-Lai Hwang; Ming-Jeng Lin", "journal": "Springer Science & Business Media", "ref_id": "b0", "title": "Group decision making under multiple criteria: methods and applications", "year": "2012" }, { "authors": "Murat Kirişci; Ibrahim Demir; Necip Şimşek", "journal": "Artificial Intelligence in Medicine", "ref_id": "b1", "title": "Fermatean fuzzy electre multi-criteria group decision-making and most suitable biomedical material selection", "year": "2022" }, { "authors": "Muhammad Akram; Ahmad N Al-Kenani Shumaiza", "journal": "Symmetry", "ref_id": "b2", "title": "Multi-criteria group decision-making for selection of green suppliers under bipolar fuzzy promethee process", "year": "2020" }, { "authors": "Serkan Fatih Emre Boran; Mustafa Genç; Diyar Kurt; Akay", "journal": "Expert systems with applications", "ref_id": "b3", "title": "A multi-criteria intuitionistic fuzzy group decision making for supplier selection with topsis method", "year": "2009" }, { "authors": "Zeshui Xu", "journal": "European journal of operational research", "ref_id": "b4", "title": "On consistency of the weighted geometric mean complex judgement matrix in ahp", "year": "2000" }, { "authors": "Xiangxin Li; Kongsen Wang; Liwen Liu; Jing Xin; Hongrui Yang; Chengyao Gao", "journal": "Procedia engineering", "ref_id": "b5", "title": "Application of the entropy weight and topsis method in safety evaluation of coal mines", "year": "2011" }, { "authors": "Huaxin Song; Baiyi Lu; Chunhui Ye; Jie Li; Zhiwei Zhu; Lufei Zheng", "journal": "Food Research International", "ref_id": "b6", "title": "Fraud vulnerability quantitative assessment of wuchang rice industrial chain in china based on ahp-ewm and ann methods", "year": "2021" }, { "authors": "Sina Salimian; Seyed Meysam Mousavi; Laura Tupenaite; Jurgita Antucheviciene", "journal": "Buildings", "ref_id": "b7", "title": "An integrated multi-criteria decision model to select sustainable construction projects under intuitionistic fuzzy conditions", "year": "2023" }, { "authors": "Ting-Yu Chen", "journal": "Applied Soft Computing", "ref_id": "b8", "title": "The inclusion-based topsis method with interval-valued intuitionistic fuzzy sets for multiple criteria group decision making", "year": "2015" }, { "authors": "Shi-Fang Zhang; San-Yang Liu", "journal": "Expert Systems with Applications", "ref_id": "b9", "title": "A gra-based intuitionistic fuzzy multi-criteria group decision making method for personnel selection", "year": "2011" }, { "authors": "Behnam Vahdani; , S Meysam Mousavi; R Tavakkoli-Moghaddam; Hashemi", "journal": "Applied Mathematical Modelling", "ref_id": "b10", "title": "A new design of the elimination and choice translating reality method for multi-criteria group decision-making in an intuitionistic fuzzy environment", "year": "2013" }, { "authors": "Feifei Jin; Lidan Pei; Huayou Chen; Ligang Zhou", "journal": "Knowledge-Based Systems", "ref_id": "b11", "title": "Interval-valued intuitionistic fuzzy continuous weighted entropy and its application to multi-criteria fuzzy group decision making", "year": "2014" }, { "authors": "Hossein Gitinavard; S Meysam Mousavi; Behnam Vahdani", "journal": "Neural Computing and Applications", "ref_id": "b12", "title": "A new multi-criteria weighting and ranking model for group decision-making analysis based on interval-valued hesitant fuzzy sets to selection problems", "year": "2016" }, { "authors": "Yuetong Liu; Chaolang Hu; Shiquan Zhang; Qixiao Hu", "journal": "", "ref_id": "b13", "title": "A new approach to the 
determination of expert weights in multi-attribute group decision making", "year": "2023" }, { "authors": "A Lotfi; Zadeh", "journal": "Information and control", "ref_id": "b14", "title": "Fuzzy sets", "year": "1965" }, { "authors": "T Krassimir; S Atanassov; Stoeva", "journal": "Fuzzy sets and Systems", "ref_id": "b15", "title": "Intuitionistic fuzzy sets", "year": "1986" }, { "authors": "Mai Gehrke; Carol Walker; Elbert Walker", "journal": "structure", "ref_id": "b16", "title": "Some comments on interval valued fuzzy sets", "year": "1996" }, { "authors": "Vicenç Torra; Yasuo Narukawa", "journal": "IEEE", "ref_id": "b17", "title": "On hesitant fuzzy sets and decision", "year": "2009" }, { "authors": "Bin Zhu; Zeshui Xu; Meimei Xia", "journal": "Journal of applied mathematics", "ref_id": "b18", "title": "Dual hesitant fuzzy sets", "year": "2012" }, { "authors": " Ronald R Yager", "journal": "IEEE", "ref_id": "b19", "title": "Pythagorean fuzzy subsets", "year": "2013" }, { "authors": "T Krassimir; Atanassov; T Krassimir; Atanassov", "journal": "", "ref_id": "b20", "title": "Interval valued intuitionistic fuzzy sets. Intuitionistic fuzzy sets: Theory and applications", "year": "1999" }, { "authors": "Deng-Feng Li", "journal": "Applied Soft Computing", "ref_id": "b21", "title": "Closeness coefficient based nonlinear programming method for interval-valued intuitionistic fuzzy multiattribute decision making with incomplete preference information", "year": "2011" }, { "authors": "Ze-Shui Xu; Chen Jian", "journal": "Systems Engineering-Theory & Practice", "ref_id": "b22", "title": "Approach to group decision making based on interval-valued intuitionistic judgment matrices", "year": "2007" }, { "authors": "Ke Xu; Jianzhong Zhou; Ran Gu; Hui Qin", "journal": "Expert Systems with Applications", "ref_id": "b23", "title": "Approach for aggregating interval-valued intuitionistic fuzzy information and its application to reservoir operation", "year": "2011" }, { "authors": "H-J Zimmermann; P Zysno", "journal": "Fuzzy sets and systems", "ref_id": "b24", "title": "Latent connectives in human decision making", "year": "1980" } ]
[ { "formula_coordinates": [ 4, 63.74, 162.95, 437.8, 13.73 ], "formula_id": "formula_0", "formula_text": "𝐴 = { < 𝑥, (𝜇 𝐴 (𝑥), 𝑣 𝐴 (𝑥)) > |𝑥 ∈ 𝑋 } . (1" }, { "formula_coordinates": [ 4, 501.55, 164.11, 3.87, 9.96 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 4, 63.74, 210.66, 441.68, 14.24 ], "formula_id": "formula_2", "formula_text": "𝜇 𝐴 (𝑥) = [𝜇 - 𝐴 (𝑥), 𝜇 + 𝐴 (𝑥)], 𝑣 𝐴 (𝑥) = [𝑣 - 𝐴 (𝑥), 𝑣 + 𝐴 (𝑥)],(2)" }, { "formula_coordinates": [ 4, 89.88, 229.78, 410.24, 14.24 ], "formula_id": "formula_3", "formula_text": "𝜇 𝐴 (𝑥) ⊆ [0, 1], 𝑣 𝐴 (𝑥) ⊆ [0, 1] and 0 ≤ 𝜇 𝐴 (𝑥) + 𝑣 𝐴 (𝑥) ≤ 1. When 𝜇 - 𝐴 (𝑥) = 𝜇 + 𝐴 (𝑥) and 𝑣 - 𝐴 (𝑥) = 𝑣 + 𝐴 (𝑥)" }, { "formula_coordinates": [ 4, 63.74, 271.64, 437.8, 14.24 ], "formula_id": "formula_4", "formula_text": "𝜋 𝐴 (𝑥) = [𝜋 - 𝐴 (𝑥), 𝜋 + 𝐴 (𝑥)] = [1 -𝜇 + 𝐴 (𝑥) -𝑣 + 𝐴 (𝑥), 1 -𝜇 - 𝐴 (𝑥) -𝑣 - 𝐴 (𝑥)]. (3" }, { "formula_coordinates": [ 4, 38.84, 273.06, 466.58, 64.62 ], "formula_id": "formula_5", "formula_text": ") Def 2. If 𝐴 𝑥 =< 𝜇 𝐴 (𝑥), 𝑣 𝐴 (𝑥) >=< [𝜇 - 𝐴 (𝑥), 𝜇 + 𝐴 (𝑥)], [𝑣 - 𝐴 (𝑥), 𝑣 + 𝐴 (𝑥)] >, 𝐵 𝑥 =< 𝜇 𝐵 (𝑥), 𝑣 𝐵 (𝑥) >=< [𝜇 - 𝐵 (𝑥), 𝜇 + 𝐵 (𝑥)], [𝑣 - 𝐵 (𝑥), 𝑣 + 𝐵 (𝑥)" }, { "formula_coordinates": [ 4, 38.84, 367.67, 466.58, 105.9 ], "formula_id": "formula_6", "formula_text": "𝐴 𝑥 ⊕ 𝐵 𝑥 =< [𝜇 - 𝐴 (𝑥) + 𝜇 - 𝐵 (𝑥) -𝜇 - 𝐴 (𝑥) ⋅ 𝜇 - 𝐵 (𝑥), 𝜇 + 𝐴 (𝑥) + 𝜇 + 𝐵 (𝑥) -𝜇 + 𝐴 (𝑥) ⋅ 𝜇 + 𝐵 (𝑥)], [𝑣 - 𝐴 (𝑥) ⋅ 𝑣 - 𝐵 (𝑥), 𝑣 + 𝐴 (𝑥) ⋅ 𝑣 + 𝐵 (𝑥)] >; 2. 𝐴 𝑥 ⊗ 𝐵 𝑥 =< [𝜇 - 𝐴 (𝑥) ⋅ 𝜇 - 𝐵 (𝑥), 𝜇 + 𝐴 (𝑥) ⋅ 𝜇 + 𝐵 (𝑥)], [𝑣 - 𝐴 (𝑥) + 𝑣 - 𝐵 (𝑥) -𝑣 - 𝐴 (𝑥) ⋅ 𝑣 - 𝐵 (𝑥), 𝑣 + 𝐴 (𝑥) + 𝑣 + 𝐵 (𝑥) -𝑣 + 𝐴 (𝑥) ⋅ 𝑣 + 𝐵 (𝑥)] >; 3. 𝜆 ⋅ 𝐴 𝑥 =< [1 -(1 -𝜇 - 𝐴 (𝑥)) 𝜆 , 1 -(1 -𝜇 + 𝐴 (𝑥)) 𝜆 ], [(𝑣 - 𝐴 (𝑥)) 𝜆 , (𝑣 + 𝐴 (𝑥)) 𝜆 ] >; 4. (𝐴 𝑥 ) 𝜆 =< [(𝜇 - 𝐴 (𝑥)) 𝜆 , (𝜇 + 𝐴 (𝑥)) 𝜆 ], [1 -(1 -𝑣 - 𝐴 (𝑥)) 𝜆 , 1 -(1 -𝑣 + 𝐴 (𝑥)) 𝜆 ] > . Def 3. If 𝐴 𝑥 =< 𝜇 𝐴 (𝑥), 𝑣 𝐴 (𝑥) >=< [𝜇 - 𝐴 (𝑥), 𝜇 + 𝐴 (𝑥)], [𝑣 - 𝐴 (𝑥), 𝑣 + 𝐴 (𝑥)] >, 𝐵 𝑥 =< 𝜇 𝐵 (𝑥), 𝑣 𝐵 (𝑥) >=< [𝜇 - 𝐵 (𝑥), 𝜇 + 𝐵 (𝑥)], [𝑣 - 𝐵 (𝑥), 𝑣 + 𝐵 (𝑥)" }, { "formula_coordinates": [ 4, 63.74, 507.72, 437.8, 31.06 ], "formula_id": "formula_7", "formula_text": "𝑝 -(𝐴 𝑥 ⊇ 𝐵 𝑥 ) = 𝑚𝑎𝑥 { 1 -𝑚𝑎𝑥 { (1 -𝑣 - 𝐵 (𝑥)) -𝜇 - 𝐴 (𝑥) (1 -𝜇 - 𝐴 (𝑥) -𝑣 + 𝐴 (𝑥)) + (1 -𝜇 + 𝐵 (𝑥) -𝑣 - 𝐵 (𝑥)) , 0 } , 0 } . (4" }, { "formula_coordinates": [ 4, 501.55, 518.06, 3.87, 9.96 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 4, 63.74, 562.64, 437.8, 31.06 ], "formula_id": "formula_9", "formula_text": "𝑝 + (𝐴 𝑥 ⊇ 𝐵 𝑥 ) = 𝑚𝑎𝑥 { 1 -𝑚𝑎𝑥 { (1 -𝑣 + 𝐵 (𝑥)) -𝜇 + 𝐴 (𝑥) (1 -𝜇 + 𝐴 (𝑥) -𝑣 - 𝐴 (𝑥)) + (1 -𝜇 - 𝐵 (𝑥) -𝑣 + 𝐵 (𝑥)) , 0 } , 0 } . (5" }, { "formula_coordinates": [ 4, 501.55, 572.97, 3.87, 9.96 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 4, 63.74, 617.21, 441.68, 21.48 ], "formula_id": "formula_11", "formula_text": "𝑝(𝐴 𝑥 ⊇ 𝐵 𝑥 ) = 1 2 (𝑝 -(𝐴 𝑥 ⊇ 𝐵 𝑥 ) + 𝑝 + (𝐴 𝑥 ⊇ 𝐵 𝑥 )),(6)" }, { "formula_coordinates": [ 4, 51.29, 669.45, 137.85, 23.24 ], "formula_id": "formula_12", "formula_text": "1. 0 ≤ 𝑝(𝐴 𝑥 ⊇ 𝐵 𝑥 ) ≤ 1; 2. 𝑝(𝐴 𝑥 ⊇ 𝐵 𝑥 ) + 𝑝(𝐴 𝑥 ⊆ 𝐵 𝑥 ) = 1." 
}, { "formula_coordinates": [ 5, 63.74, 617.09, 437.8, 16.68 ], "formula_id": "formula_13", "formula_text": "𝐴 (𝑖) = ( 𝐴 1 (𝑖) 𝐴 2 (𝑖) ⋯ 𝐴 𝑙 (𝑖) ) , (7" }, { "formula_coordinates": [ 5, 501.55, 621.32, 3.87, 9.96 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 5, 66.76, 648.67, 121.58, 14.83 ], "formula_id": "formula_15", "formula_text": "𝐴 𝑘 (𝑖) = ( 𝐴 𝑘 𝑖1 𝐴 𝑘 𝑖2 ⋯ 𝐴 𝑘 𝑖𝑛" }, { "formula_coordinates": [ 6, 63.74, 111.29, 437.8, 16.82 ], "formula_id": "formula_16", "formula_text": "𝐴 𝑘 (𝑖) = ( 𝜇 𝑘- 𝑖1 ⋯ 𝜇 𝑘- 𝑖𝑛 𝜇 𝑘+ 𝑖1 ⋯ 𝜇 𝑘+ 𝑖𝑛 𝑣 𝑘- 𝑖1 ⋯ 𝑣 𝑘- 𝑖𝑛 𝑣 𝑘+ 𝑖1 ⋯ 𝑣 𝑘+ 𝑖𝑛 ) 𝑇 . (8" }, { "formula_coordinates": [ 6, 501.55, 115.1, 3.87, 9.96 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 6, 63.74, 155.57, 437.8, 159.48 ], "formula_id": "formula_18", "formula_text": "𝒃 (𝑖) = 𝑙 ∑ 𝑘=1 𝑤 𝑘 ⋅ 𝐴 𝑘 (𝑖) = ⎛ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎝ ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝜇 𝑘- 𝑖1 ⋮ ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝜇 𝑘- 𝑖𝑛 ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝜇 𝑘+ 𝑖1 ⋮ ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝜇 𝑘+ 𝑖𝑛 ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝑣 𝑘- 𝑖1 ⋮ ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝑣 𝑘- 𝑖𝑛 ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝑣 𝑘+ 𝑖1 ⋮ ∑ 𝑙 𝑘=1 𝑤 𝑘 ⋅ 𝑣 𝑘+ 𝑖𝑛 ⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠ , (9" }, { "formula_coordinates": [ 6, 501.55, 229.68, 3.87, 9.96 ], "formula_id": "formula_19", "formula_text": ")" }, { "formula_coordinates": [ 6, 246.34, 350.31, 89.25, 11.1 ], "formula_id": "formula_20", "formula_text": "𝐴 (1) 𝐴 (2) ⋯ 𝐴 (𝑚)" }, { "formula_coordinates": [ 6, 63.74, 410.16, 437.53, 19.73 ], "formula_id": "formula_21", "formula_text": "𝑑 (𝑘) = ‖ ‖ ‖ 𝐴 (𝑘) -𝒃 ‖ ‖ ‖2 . (10" }, { "formula_coordinates": [ 6, 501.27, 411.58, 4.15, 9.96 ], "formula_id": "formula_22", "formula_text": ")" }, { "formula_coordinates": [ 6, 63.74, 453.91, 437.53, 63.44 ], "formula_id": "formula_23", "formula_text": "min 𝒘 𝑄(𝒘) = 𝑙 ∑ 𝑘=1 𝑑 (𝑘) 𝑠.𝑡. { ∑ 𝑙 𝑘=1 𝑤 𝑘 = 1, 0 ≤ 𝑤 𝑘 ≤ 1, 1 ≤ 𝑘 ≤ 𝑙. 
(11" }, { "formula_coordinates": [ 6, 501.27, 480.85, 4.15, 9.96 ], "formula_id": "formula_24", "formula_text": ")" }, { "formula_coordinates": [ 7, 492.97, 277.69, 12.45, 9.96 ], "formula_id": "formula_25", "formula_text": ")12" }, { "formula_coordinates": [ 7, 287.12, 304.94, 39.58, 18.04 ], "formula_id": "formula_26", "formula_text": "⋅ 𝑘 𝑖𝑗 ) of 𝐴 ⋅ 𝑘 𝑖𝑗" }, { "formula_coordinates": [ 7, 63.74, 341.99, 441.68, 30.96 ], "formula_id": "formula_27", "formula_text": "𝑝(𝐴 ⋅ 𝑘 𝑖𝑗 ) = 1 𝑙(𝑙 -1) ( 𝑙 ∑ 𝑘 ′ =1 𝑝(𝐴 ⋅ 𝑘 𝑖𝑗 ⊇ 𝐴 ⋅ 𝑘 ′ 𝑖𝑗 ) + 𝑙 2 -1).(13)" }, { "formula_coordinates": [ 7, 88.65, 454.96, 412.62, 33.08 ], "formula_id": "formula_28", "formula_text": "𝜏 𝑘 = 𝑒 -((𝑘-𝑢 𝑙 ) 2 ∕2⋅𝑡 2 𝑙 ) ∑ 𝑙 𝑘 ′ =1 𝑒 -((𝑘 ′ -𝑢 𝑙 ) 2 ∕2⋅𝑡 2 𝑙 ) , (14" }, { "formula_coordinates": [ 7, 501.27, 464.34, 4.15, 9.96 ], "formula_id": "formula_29", "formula_text": ")" }, { "formula_coordinates": [ 7, 88.65, 540.04, 326.38, 30.45 ], "formula_id": "formula_30", "formula_text": "𝐴 𝑖𝑗 =< [1 - 1 ∏ 𝑘=1 (1 -𝜇 ⋅ 𝜎(𝑘)- 𝑖𝑗 ) 𝜏 𝑘 , 1 - 1 ∏ 𝑘=1 𝜇 ⋅ 𝜎(𝑘)+ 𝑖𝑗 ) 𝜏 𝑘 ], [ 1 ∏ 𝑘=1 (𝑣 ⋅ 𝜎(𝑘)- 𝑖𝑗 ) 𝜏 𝑘 , 1 ∏ 𝑘=1 (𝑣 ⋅ 𝜎(𝑘)+ 𝑖𝑗 ) 𝜏 𝑘 ] >," }, { "formula_coordinates": [ 7, 63.74, 652.95, 437.53, 16.91 ], "formula_id": "formula_31", "formula_text": "𝐴 + = { < 𝑥 𝑗 , ([𝜇 - +𝑗 , 𝜇 + +𝑗 ], [𝑣 - +𝑗 , 𝑣 + +𝑗 ]) > |𝑥 𝑗 ∈ 𝑋, 𝑗 = 1, 2, ⋯ , 𝑛 } , (15" }, { "formula_coordinates": [ 7, 501.27, 657.19, 4.15, 9.96 ], "formula_id": "formula_32", "formula_text": ")" }, { "formula_coordinates": [ 7, 63.74, 675.37, 437.53, 16.91 ], "formula_id": "formula_33", "formula_text": "𝐴 -= { < 𝑥 𝑗 , ([𝜇 - -𝑗 , 𝜇 + -𝑗 ], [𝑣 - -𝑗 , 𝑣 + -𝑗 ]) > |𝑥 𝑗 ∈ 𝑋, 𝑗 = 1, 2, ⋯ , 𝑛 } , (16" }, { "formula_coordinates": [ 7, 501.27, 679.6, 4.15, 9.96 ], "formula_id": "formula_34", "formula_text": ")" }, { "formula_coordinates": [ 8, 63.74, 76.68, 3.03, 8.73 ], "formula_id": "formula_35", "formula_text": "[" }, { "formula_coordinates": [ 8, 48.39, 206.02, 452.88, 68.5 ], "formula_id": "formula_36", "formula_text": "𝐶𝐶(𝐴 𝑖 ) = 𝑛 ∑ 𝑗=1 𝑝((𝐴 𝑖𝑗 ⊇ 𝐴 -𝑗 |𝑥 𝑗 ∈ 𝑋 𝑏 ), (𝐴 -𝑗 ⊇ 𝐴 𝑖𝑗 |𝑥 𝑗 ∈ 𝑋 𝑐 ))𝑤 𝑗 ⋅ { 𝑛 ∑ 𝑗=1 [(𝑝(𝐴 +𝑗 ⊇ 𝐴 𝑖𝑗 ) + 𝑝(𝐴 𝑖𝑗 ⊇ 𝐴 -𝑗 )|𝑥 𝑗 ∈ 𝑋 𝑏 ), (𝑝(𝐴 𝑖𝑗 ⊇ 𝐴 +𝑗 ) + 𝑝(𝐴 -𝑗 ⊇ 𝐴 𝑖𝑗 )|𝑥 𝑗 ∈ 𝑋 𝑐 )]𝑤 𝑗 } -1 , (17" }, { "formula_coordinates": [ 8, 501.27, 236.13, 4.15, 9.96 ], "formula_id": "formula_37", "formula_text": ")" }, { "formula_coordinates": [ 8, 63.74, 472.8, 437.53, 45.75 ], "formula_id": "formula_38", "formula_text": "max { 𝐶𝐶(𝐴 1 ), 𝐶𝐶(𝐴 2 ), ⋯ , 𝐶𝐶(𝐴 𝑚 ) } 𝑠.𝑡. { ∑ 𝑛 𝑗=1 𝑤 𝑗 = 1, 𝑤 𝑗 ≥ 0, 𝑗 = 1, 2, ⋯ , 𝑛. (18" }, { "formula_coordinates": [ 8, 501.27, 491.26, 4.15, 9.96 ], "formula_id": "formula_39", "formula_text": ")" }, { "formula_coordinates": [ 8, 63.74, 564.02, 437.53, 44.78 ], "formula_id": "formula_40", "formula_text": "max 𝜗 𝑠.𝑡. { 𝐶𝐶(𝐴 𝑖 ) ≥ 𝜗, 𝑖 = 1, 2, ⋯ , 𝑚, ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ 0 . 
(19" }, { "formula_coordinates": [ 8, 501.27, 581.68, 4.15, 9.96 ], "formula_id": "formula_41", "formula_text": ")" }, { "formula_coordinates": [ 8, 63.74, 639.6, 437.53, 30.61 ], "formula_id": "formula_42", "formula_text": "Γ 0 = { ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) | 𝑛 ∑ 𝑗=1 𝑤 𝑗 = 1, 𝑤 𝑗 ≥ 0, 𝑗 = 1, 2, ⋯ , 𝑛 } , (20" }, { "formula_coordinates": [ 8, 501.27, 649.94, 4.15, 9.96 ], "formula_id": "formula_43", "formula_text": ")" }, { "formula_coordinates": [ 9, 63.74, 110.99, 441.68, 16.23 ], "formula_id": "formula_44", "formula_text": "𝑒 - (1)𝑗 1 𝑗 2 , 𝑒 - (2)𝑗 1 𝑗 2 , 𝑒 - (3)𝑗 1 𝑗 2 𝑗 3 , 𝑒 - (4)𝑗 1 , 𝑒 + (4)𝑗 1 , 𝑒 - (5)𝑗 1 𝑗 2 (𝑗 1 ≠ 𝑗 2 ≠ 𝑗 3 ),(21)" }, { "formula_coordinates": [ 9, 88.65, 169.7, 311.91, 18.71 ], "formula_id": "formula_45", "formula_text": "Γ 1 = { ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ 0 |𝑤 𝑗 1 + 𝑒 - (1)𝑗 1 𝑗 2 ≥ 𝑤 𝑗 2 , 𝑗 1 ∈ Υ 1 , 𝑗 2 ∈ Λ 1 } ," }, { "formula_coordinates": [ 9, 88.65, 228.29, 341.06, 18.71 ], "formula_id": "formula_46", "formula_text": "Γ 2 = { ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ 0 |𝑤 𝑗 1 -𝑤 𝑗 2 + 𝑒 - (2)𝑗 1 𝑗 2 ≥ 𝛿 ′ 𝑗 1 𝑗 2 , 𝑗 1 ∈ Υ 2 , 𝑗 2 ∈ Λ 2 } ," }, { "formula_coordinates": [ 9, 90.55, 255.17, 105.93, 15.31 ], "formula_id": "formula_47", "formula_text": "𝛿 ′ 𝑗 1 𝑗 2 is a constant and 𝛿 ′ 𝑗 1 𝑗 2" }, { "formula_coordinates": [ 9, 88.65, 288.94, 401.65, 18.71 ], "formula_id": "formula_48", "formula_text": "Γ 3 = { ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ 0 |𝑤 𝑗 1 -2𝑤 𝑗 2 + 𝑤 𝑗 3 + 𝑒 - (3)𝑗 1 𝑗 2 𝑗 3 ≥ 0, 𝑗 1 ∈ Υ 3 , 𝑗 2 ∈ Λ 3 , 𝑗 3 ∈ Ω 3 } ," }, { "formula_coordinates": [ 9, 88.65, 347.53, 363.45, 19.04 ], "formula_id": "formula_49", "formula_text": "Γ 4 = { ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ 0 |𝑤 𝑗 1 + 𝑒 - (4)𝑗 1 ≥ 𝛿 𝑗 1 , 𝑤 𝑗 1 -𝑒 + (4)𝑗 1 ≤ 𝛿 𝑗 1 + 𝜀 𝑗 1 , 𝑗 1 ∈ Υ 4 } ," }, { "formula_coordinates": [ 9, 63.74, 374.63, 441.68, 21.92 ], "formula_id": "formula_50", "formula_text": "𝛿 𝑗 1 ≥ 0, 𝜀 𝑗 1 ≥ 0, 0 ≤ 𝛿 𝑗 1 ≤ 𝛿 𝑗 1 + 𝜀 𝑗 1 ≤ 1, Υ 4 is a subset in index set 𝑁." }, { "formula_coordinates": [ 9, 63.74, 418.07, 349.65, 54.39 ], "formula_id": "formula_51", "formula_text": "Γ 5 = { ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ 0 | 𝑤 𝑗 1 𝑤 𝑗 2 + 𝑒 - (5)𝑗 1 𝑗 2 ≥ 𝛿 ′′ 𝑗 1 𝑗 2 , 𝑗 1 ∈ Υ 5 , 𝑗 2 ∈ Λ 5 } , where 𝛿 ′′ 𝑗 1 𝑗 2 is a constant and 0 ≤ 𝛿 ′ 𝑗 1 𝑗 2" }, { "formula_coordinates": [ 9, 282.42, 476.55, 113.53, 11.6 ], "formula_id": "formula_52", "formula_text": "Γ = Γ 1 ∪ Γ 2 ∪ Γ 3 ∪ Γ 4 ∪ Γ 5 ." }, { "formula_coordinates": [ 9, 63.74, 541.17, 441.68, 155.26 ], "formula_id": "formula_53", "formula_text": "max { 𝐶𝐶(𝐴 1 ), 𝐶𝐶(𝐴 2 ), ⋯ , 𝐶𝐶(𝐴 𝑚 ) } min { ∑ 𝑗 1 ,𝑗 2 ,𝑗 3 ∈𝑁 (𝑒 - (1)𝑗 1 𝑗 2 + 𝑒 - (2)𝑗 1 𝑗 2 + 𝑒 - (3)𝑗 1 𝑗 2 𝑗 3 + 𝑒 - (4)𝑗 1 + 𝑒 + (4)𝑗 1 + 𝑒 - (5)𝑗 1 𝑗 2 ) } 𝑠.𝑡. ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎩ ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ. 𝑒 - (1)𝑗 1 𝑗 2 ≥ 0, 𝑗 1 ∈ Υ 1 , 𝑗 2 ∈ Λ 1 . 𝑒 - (2)𝑗 1 𝑗 2 ≥ 0, 𝑗 1 ∈ Υ 2 , 𝑗 2 ∈ Λ 2 . 𝑒 - (3)𝑗 1 𝑗 2 𝑗 3 ≥ 0, 𝑗 1 ∈ Υ 3 , 𝑗 2 ∈ Λ 3 , 𝑗 3 ∈ Ω 3 . 𝑒 - (4)𝑗 1 ≥ 0, 𝑒 + (4)𝑗 1 ≥ 0, 𝑗 1 ∈ Υ 4 . 𝑒 - (5)𝑗 1 𝑗 2 ≥ 0, 𝑗 1 ∈ Υ 5 , 𝑗 2 ∈ Λ 5 .(22)" }, { "formula_coordinates": [ 10, 63.74, 83.98, 437.53, 153.46 ], "formula_id": "formula_54", "formula_text": "max 𝜗 𝑠.𝑡. ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ 𝐶𝐶(𝐴 𝑖 ) ≥ 𝜗, 𝑖 = 1, 2, ⋯ , 𝑚, - ∑ 𝑗 1 ,𝑗 2 ,𝑗 3 ∈𝑁 (𝑒 - (1)𝑗 1 𝑗 2 + 𝑒 - (2)𝑗 1 𝑗 2 + 𝑒 - (3)𝑗 1 𝑗 2 𝑗 3 + 𝑒 - (4)𝑗 1 + 𝑒 + (4)𝑗 1 + 𝑒 - (5)𝑗 1 𝑗 2 ) ≥ 𝜗, ( 𝑤 1 𝑤 2 ⋯ 𝑤 𝑛 ) ∈ Γ. 𝑒 - (1)𝑗 1 𝑗 2 ≥ 0, 𝑗 1 ∈ Υ 1 , 𝑗 2 ∈ Λ 1 . 𝑒 - (2)𝑗 1 𝑗 2 ≥ 0, 𝑗 1 ∈ Υ 2 , 𝑗 2 ∈ Λ 2 . 𝑒 - (3)𝑗 1 𝑗 2 𝑗 3 ≥ 0, 𝑗 1 ∈ Υ 3 , 𝑗 2 ∈ Λ 3 , 𝑗 3 ∈ Ω 3 . 𝑒 - (4)𝑗 1 ≥ 0, 𝑒 + (4)𝑗 1 ≥ 0, 𝑗 1 ∈ Υ 4 . 𝑒 - (5)𝑗 1 𝑗 2 ≥ 0, 𝑗 1 ∈ Υ 5 , 𝑗 2 ∈ Λ 5 . 
(23" }, { "formula_coordinates": [ 10, 501.27, 153.56, 4.15, 9.96 ], "formula_id": "formula_55", "formula_text": ")" }, { "formula_coordinates": [ 12, 93.51, 102.62, 375.02, 367.7 ], "formula_id": "formula_56", "formula_text": "𝐷 3 𝐴 1 𝑥 1 VH VH H 𝑥 2 M M L 𝑥 3 M M H 𝑥 4 M M L 𝑥 5 M L M 𝐴 2 𝑥 1 H H VH 𝑥 2 M H M 𝑥 3 VH H VH 𝑥 4 VH VH H 𝑥 5 L L VL 𝐴 3 𝑥 1 M L M 𝑥 2 L L M 𝑥 3 VL VL L 𝑥 4 L VL VL 𝑥 5 H VH VH 𝐴 4 𝑥 1 M H H 𝑥 2 VL M L 𝑥 3 L M L 𝑥 4 M H L 𝑥 5 VH H VH 𝐴 5 𝑥 1 VH VH H 𝑥 2 M M L 𝑥 3 M M H 𝑥 4 M M L 𝑥 5 M L M" }, { "formula_coordinates": [ 12, 97.42, 523.69, 128.5, 63.36 ], "formula_id": "formula_57", "formula_text": "min 𝒘 𝑄(𝒘) = 3 ∑ 𝑘=1 𝑑 (𝑘) 𝑠.𝑡. { ∑ 3 𝑘=1 𝑤 𝑘 = 1, 0 ≤ 𝑤 𝑘 ≤ 1, 1 ≤ 𝑘 ≤ 3." }, { "formula_coordinates": [ 13, 97.42, 345.8, 38.57, 191.44 ], "formula_id": "formula_58", "formula_text": "max 𝜗 𝑠.𝑡. ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ 0." } ]
2023-11-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b8", "b24", "b42", "b39", "b43", "b16", "b20", "b6", "b27", "b39", "b25", "b43" ], "table_ref": [], "text": "Recognizing a place solely from images becomes a challenging task when scenes undergo substantial changes in their structure or appearance. Such capability is referred to in the scientific and technical literature as visual place recognition (and by its acronym VPR), and is essential for agents to navigate and understand their surroundings autonomously in a wide array of applications, such as robotics [12-14, 22, 29] or augmented reality [19]. Specifically, it is present in simultaneous localization and mapping [9,10] and absolute pose estimation [25,43] pipelines.\nIn practice, VPR is framed as an image retrieval problem, wherein typically a query image serves as the input and the goal is to obtain an ordered list of top-k matches against a pre-existing database of geo-localized reference images. R@1 R@5 R@10 R@1 R@5 R@10 Figure 1. Illustration of a VPR baseline (left) and our contribution (right). The left column outlines a typical VPR baseline, a ResNet backbone followed by NetVLAD aggregation [4]. On the right column, we replace ResNet with a partially fine-tuned DI-NOv2 [40] backbone, and incorporate SALAD, our novel optimal transport aggregation using the Sinkhorn Algorithm. Our model achieves unprecedented state-of-the-art results on common VPR benchmarks.\nImages are represented as an aggregation of appearance pattern descriptors, which are subsequently compared via nearest neighbour. The effectiveness of this matching relies on generating discriminative per-image descriptors that exhibit robust performance even for challenging variations such as fluctuating illumination, structural transformations, temporal changes, weather and seasonal shifts. Most recent research on VPR have thus focused on the two key compo-nents of this general pipeline, namely the deep neural backbones for feature extraction and methods for aggregating such features. For years, ResNet-based neural networks have been the predominant backbones for feature extraction [4,23,44]. Recently, given the success of Vision Transformer (ViT) for different computer vision tasks [17,21,30,33], some methods have introduced ViT in the field of VPR [57,64]. AnyLoc [28] proposed to leverage foundation models, using DINOv2 [40] as a feature extractor for VPR. However, AnyLoc uses DINOv2 'as is', while we show in this paper that fine-tuning the model for VPR brings a significant increase in performance.\nRegarding aggregation, the handcrafted VLAD [26] and its learned counterpart NetVLAD [4] are among the most popular choices. Basically, they aggregate local descriptors by quantizing them into a set of clusters and storing the sum of residuals per cluster. Alternative methods include pooling layers like GeM [44] or learned global aggregation, like the recent MixVPR [2]. In this paper, we propose optimal transport aggregation, setting a new state of the art in VPR.\nAs a summary, in this work, we present a single-stage approach to VPR that obtains state-of-the-art results in the most common benchmarks. To achieve this, we present two key contributions:\n• First, we propose SALAD (Sinkhorn Algorithm for Locally Aggregated Descriptors), a reformulation of the feature-to-cluster assignment problem through the lens of optimal transport, allowing more effective distribution of local features into the global descriptor bins. 
To further improve the discriminative power of the aggregated descriptor, we let the network discard uninformative features by introducing a 'dustbin' mechanism. • Secondly, we integrate the representational power of foundation models into VPR, using DINOv2 model as the backbone for feature extraction. Unlike previous approaches that utilized DINOv2 in its pre-trained form, our method involves fine-tuning the model specifically for the task. This fine-tuning process converges extremely fast, in just four epochs, and allows DINOv2 to capture more relevant and distinctive features pertinent to place recognition tasks. The fusion of these two novel components results in DI-NOv2 SALAD, which can be efficiently trained in less than one hour and sets unprecedented recalls in VPR benchmarks, with 75.0% Recall@1 in MSLS Challenge and 76.0% in Nordland. All of this with a single-stage pipeline, without requiring expensive post-processing steps and with an inference speed of less than 3 ms per image." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b47", "b62", "b4", "b30", "b19", "b2", "b14", "b25", "b7", "b51", "b17", "b17", "b52", "b25", "b43", "b45", "b48", "b53", "b1", "b59", "b46" ], "table_ref": [], "text": "The significant research efforts on VPR have been exhaustively compiled in a number of surveys and tutorials over the years [19,35,36,48,63]. Current research addresses a wide variety of topics, such as novel loss functions [5,31], image sequences [20,59], extreme viewpoint changes [32] or text features [24]. In this section, we focus on work related to feature extraction and aggregation, as there lie our contributions.\nEarly approaches to VPR used either aggregations of handcrafted local features [3,15,26] or global descriptors [38,52]. In both cases, geometric [18] and temporal [18,37] consistency was sometimes enforced for enhanced performance. With the emergence of deep neural networks, features pre-trained for recognition tasks, without fine-tuning, showed a significant performance boost over handcrafted ones [53]. However, training or finetuning specifically for VPR tasks using contrastive or triplet losses [39] offers an additional improvement and is standard nowadays.\nNetVLAD [4] is the most popular architecture explicitly designed for VPR, mimicking the VLAD aggregation [26] but jointly learning from data both convolutional features and cluster centroids. Radenović et al. [44] proposed the Generalized Mean Pooling (GeM) to aggregate feature activations, also a popular baseline due to its simplicity and competitive performance. In addition to these, several other alternatives have been proposed in the literature. For example, Teichmann et al. [55] aggregates regions instead of local features. Recently, MixVPR [2] has presented the best results in the literature by combining deep features with a MLP layer.\nA notable trend in VPR has been the adoption of a twostage approach to enhance retrieval accuracy [11,23,46,49,54,64]. After a first stage with any of the methods presented in the previous paragraph, the top retrieved candidates are re-ranked attending to the un-aggregated local features, either assessing the geometric consistency to the query image or predicting their similarity. This re-ranking stage adds a considerable overhead, which is why it is only applied to a few candidates, but generally improves the performance. 
Re-ranking is out of the scope of our research but, notably, we outperform all baselines that employ re-ranking even if our model does not include such stage (and hence it is substantially faster).\nOptimal transport has found a significant number of applications in graphics and computer vision [8]. Specifically, related to our research, it has been used for image retrieval [42], image matching [60] and feature matching [47,51]. Recently, Zhang et al. [62] used optimal transport at the re-ranking stage in a retrieval pipeline. However, ours is the first work that proposes the formulation of local feature aggregation from an optimal transport perspective. " }, { "figure_ref": [], "heading": "Global Descriptor", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "DINOv2 SALAD is based on NetVLAD, but we propose to use and fine-tune the DINOv2 backbone (Sec. 3.1) and propose a novel module (SALAD) for the assignment (Sec. 3.2) and aggregation (Sec. 3.3) of features." }, { "figure_ref": [], "heading": "Local Feature Extraction", "publication_ref": [ "b27", "b39" ], "table_ref": [], "text": "Effective local feature extraction lies in striking a balance: features must be robust enough to withstand substantial changes in appearance, such as those between seasons or from day to night, yet they should retain sufficient information on local structure to enable accurate matching.\nInspired by the success of ViT architectures in many computer vision tasks and by AnyLoc [28], that leverages the exceptional representational capabilities of foundation models [7], we adopt DINOv2 [40] as our backbone. However, differently from AnyLoc, we use a supervised pipeline and include the backbone in the end-to-end training for the specific task, yielding improved performance.\nDINOv2 adopts a ViT architecture that initially divides an input image I ∈ R h×w×c into p × p patches, with p = 14. These patches are sequentially projected with transformer blocks, resulting in the output tokens {t 1 , . . . , t n , t n+1 }, t i ∈ R d , where n = hw/p 2 is the number of input patches and there is an added global token t n+1 that aggregates class information. Although the DINOv2's authors reported that fine-tuning the model only brings dim improvements, we found that at least for VPR there are substantial gains in selectively unfreezing and training the last blocks of the encoder." }, { "figure_ref": [], "heading": "Assignment", "publication_ref": [ "b46", "b46" ], "table_ref": [], "text": "In NetVLAD, a global descriptor is formed by assigning a set of features to a set of clusters, {C 1 , . . . , C j , . . . , C m }, and then aggregating all features that belong to each cluster. For the assignment, NetVLAD computes a score matrix S ∈ R n×m >0 , where the element in its i th row and j th column, s i,j ∈ R >0 , represents the cost of assigning a feature to a cluster C j . In other words, S quantifies the affinity of each feature to each clusters. While SALAD draws inspiration from NetVLAD, we identify several crucial aspects in their assignment and propose alternatives to address these.\nReduce assignment priors. When building the score matrix S, NetVLAD introduces certain priors. Specifically, it initializes the linear layer that computes S with centroids derived from k-means. While this may accelerate the training, it introduces inductive bias and potentially makes the model more susceptible to local minima. 
In contrast, we propose to learn each row $s_i$ of the score matrix from scratch with two fully connected layers initialized randomly:\n$s_i = W_{s2}\left(\sigma\left(W_{s1}(t_i) + b_{s1}\right)\right) + b_{s2}$ (1)\nwhere $W_{s1}$, $W_{s2}$ and $b_{s1}$, $b_{s2}$ are the weights and biases of the layers, and $\sigma$ is a non-linear activation function. Discard uninformative features. Some features, such as those representing the sky, might contain negligible information for VPR. NetVLAD does not account for this, and the contribution of all features is preserved in the final descriptor. On the contrary, we follow recent works on keypoint matching and introduce a 'dustbin' to which non-informative features are assigned. For that, we augment the score matrix from $S$ to $\bar{S} = [S, \bar{s}_{\cdot,m+1}] \in \mathbb{R}^{n\times(m+1)}_{>0}$, by appending the column $\bar{s}_{\cdot,m+1}$ representing the feature-to-dustbin relation. As in SuperGlue [47], this score is modeled with a single learnable parameter $z \in \mathbb{R}$:\n$\bar{s}_{\cdot,m+1} = z\,\mathbf{1}_n$ (2)\nbeing $\mathbf{1}_n = [1, \dots, 1]^\top \in \mathbb{R}^n$ an $n$-dimensional vector of ones.\nOptimal assignment. The original NetVLAD assignment computes a per-row softmax over $S$ to obtain the distribution of each feature's mass across the clusters. However, this approach only considers the feature-to-cluster relationship and overlooks the reverse, the cluster-to-feature relation. For this reason, we reformulate the assignment as an optimal transport problem where the features' mass, $\mu = \mathbf{1}_n$, must be effectively distributed among the clusters or the 'dustbin', $\kappa = [\mathbf{1}_m^\top, n-m]^\top$. We follow SuperGlue [47] and use the Sinkhorn Algorithm [16, 50] to obtain the assignment $\bar{P} \in \mathbb{R}^{n\times(m+1)}$ such that\n$\bar{P}\,\mathbf{1}_{m+1} = \mu \quad \text{and} \quad \bar{P}^\top\mathbf{1}_n = \kappa$ (3)\nThis algorithm finds the optimal transport assignment between distributions $\mu$ and $\kappa$ by iteratively normalizing the rows and columns of $\exp\bar{S}$. Finally, we drop the dustbin column to obtain the assignment $P = [p_{*,1}, \dots, p_{*,m}]$, where $p_{*,j}$ stands for the $j$-th column of $\bar{P}$." }, { "figure_ref": [], "heading": "Aggregation", "publication_ref": [], "table_ref": [], "text": "Once the feature assignment in our SALAD framework is computed as detailed in Sec. 3.2, we focus on the aggregation of these assigned features to form the final global descriptor. The aggregation process in NetVLAD involves combining all features assigned to each cluster $C_j$. However, we introduce three variations: Dimensionality reduction. To efficiently manage the final descriptor size, we first reduce the dimensionality of the tokens from $\mathbb{R}^d$ to $\mathbb{R}^l$. This is achieved by processing the features through two fully connected layers, precisely adjusting the size of the feature vectors while retaining the information essential for the task.\n$f_i = W_{f2}\left(\sigma\left(W_{f1}(t_i) + b_{f1}\right)\right) + b_{f2}$ (4)\nAggregation. Based on the assignment matrix derived using the Sinkhorn Algorithm, each feature is aggregated into its assigned cluster. Differently from NetVLAD, we do not subtract the centroids to get the residuals. We directly aggregate these features with a summation, reducing the incorporated priors about the aggregation. Viewing the resulting VLAD vector as a matrix $V \in \mathbb{R}^{m\times l}$, each element $V_{j,k} \in \mathbb{R}$ is computed as follows:\n$V_{j,k} = \sum_{i=1}^{n} P_{i,j} \cdot f_{i,k}$ (5)\nwhere $f_{i,k}$ corresponds to the $k$-th dimension of $f_i$, with $k \in \{1, \dots, l\}$.\nGlobal token. To include global information about the scene not easily incorporated into local features, we also incorporate a scene descriptor $g$ computed as:\n$g = W_{g2}\left(\sigma\left(W_{g1}(t_{n+1}) + b_{g1}\right)\right) + b_{g2}$ (6)\nwhere $t_{n+1}$ is the global token from DINOv2. We then concatenate $g$ with $V$ flattened. 
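To make the assignment and aggregation steps above concrete, the following is a minimal PyTorch sketch of a SALAD-style head. It assumes the n patch tokens and the global token from the backbone are given as tensors; module names, the number of Sinkhorn iterations and other defaults are illustrative assumptions and do not correspond to the released implementation. The sketch also applies the intra- and final L2 normalization mentioned in the next sentence.

import torch
import torch.nn as nn
import torch.nn.functional as F

def sinkhorn(scores, n_iters=3):
    # scores: (n, m+1) augmented score matrix; the last column is the dustbin.
    n, m1 = scores.shape
    m = m1 - 1
    # Marginals of Eq. (3): every feature carries one unit of mass, every cluster
    # receives one unit, and the dustbin absorbs the remaining n - m units.
    log_mu = scores.new_zeros(n)
    log_kappa = torch.cat([scores.new_zeros(m), scores.new_tensor([n - m]).log()])
    u = scores.new_zeros(n)
    v = scores.new_zeros(m1)
    for _ in range(n_iters):
        # Alternate row and column normalization of exp(scores), done in log space.
        u = log_mu - torch.logsumexp(scores + v[None, :], dim=1)
        v = log_kappa - torch.logsumexp(scores + u[:, None], dim=0)
    return (scores + u[:, None] + v[None, :]).exp()  # (n, m+1) transport plan

class SALADHead(nn.Module):
    def __init__(self, d=768, l=128, m=64, hidden=512):
        super().__init__()
        self.score_mlp = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, m))
        self.feat_mlp = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, l))
        self.token_mlp = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, 256))
        self.z = nn.Parameter(torch.tensor(0.0))  # learnable dustbin score, Eq. (2)
        self.m = m

    def forward(self, tokens, global_token):
        # tokens: (n, d) patch tokens; global_token: (d,) class token from the backbone.
        s = self.score_mlp(tokens)                                    # (n, m), Eq. (1)
        s_bar = torch.cat([s, self.z.view(1, 1).expand(s.size(0), 1)], dim=1)
        P_bar = sinkhorn(s_bar)                                       # optimal transport, Eq. (3)
        P = P_bar[:, :self.m]                                         # drop the dustbin column
        f = self.feat_mlp(tokens)                                     # (n, l), Eq. (4)
        V = P.t() @ f                                                 # (m, l): sum_i P[i,j] * f[i,k], Eq. (5)
        V = F.normalize(V, dim=1)                                     # intra-normalization per cluster
        g = self.token_mlp(global_token)                              # scene descriptor, Eq. (6)
        desc = torch.cat([g, V.flatten()])
        return F.normalize(desc, dim=0)                               # final L2 normalization

# Example with random tensors standing in for DINOv2 outputs (d = 768, 16x16 patches).
tokens, cls = torch.randn(256, 768), torch.randn(768)
print(SALADHead()(tokens, cls).shape)  # torch.Size([8448]) = 64 * 128 + 256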
Following NetVLAD, we apply an L2 intra-normalization and then an L2 normalization of the entire vector, yielding the final global descriptor." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "To rigorously evaluate the effectiveness of our proposed contributions, we conducted exhaustive experiments following standard evaluation protocols. In Sec. 4.1, we present implementation details regarding architecture, training, and validation. Then, in Sec. 4.2, we compare our method with recent VPR methods, and, in Sec. 4.3, we show ablation studies that assess the importance of the different proposed contributions. Section 4.4 shows qualitative results that help to interpret SALAD's components." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b0", "b57", "b55" ], "table_ref": [], "text": "We ground our training and evaluation setups on the publicly provided framework by MixVPR.\nFor the architecture, we opt for a pretrained DINOv2-B as our feature extraction backbone, targeting a balance between computational efficiency and representational capacity. We keep most of the backbone frozen, with the exception of the final 4 layers of the encoder. This approach enhances performance significantly without markedly increasing training time. For the fully connected layers, the weights of the hidden layers $W_{s1}$, $W_{f1}$ and $W_{g1}$ have 512 neurons and use ReLU for the activation function $\sigma$. To optimize feature handling, we employ a dimensionality reduction, compressing feature and global token dimensions from d = 768 to l = 128. We use m = 64 clusters, resulting in a global descriptor of size 128 × 64 + 256.\nWe train on GSV-Cities [1], a large dataset of urban locations collected from Google Street View. Given the impressive representation power of DINOv2, our pipeline achieves training convergence within just 4 complete epochs. Using a batch size of 60 places, each represented by 4 images, the training is completed in 30 minutes on a single NVIDIA RTX 3090. We use Multi-Similarity loss [58] and AdamW [34] for the optimization, with a learning rate set to 6e-5. To ensure an effective learning rate, we reduce the initial rate linearly at every step until 20% of the initial value. We use dropout of 0.3 on the score projection. To validate our experiments and obtain the best set of hyperparameters, we monitored the recall on the Pittsburgh30k-test [56]. We observed that in the long run, most configurations perform similarly, but rapid convergence in a few epochs is more sensitive to the hyperparameters." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b43", "b4", "b5", "b55", "b13", "b6" ], "table_ref": [ "tab_0", "tab_1" ], "text": "We benchmarked our model against several single-stage baselines, namely NetVLAD [4] and GeM [44] as two representative traditional baselines, and CosPlace [5], MixVPR [2] and EigenPlaces [6] as the three most recent and best performing baselines in the literature. The evaluation spanned a diverse array of well-established datasets: MSLS Validation and Challenge [59], which are comprised of dashcam images; Pittsburgh30k-test and Pittsburgh250k-test [56], featuring urban scenarios; SPED [14], a collection from surveillance cameras; and NordLand, notable for its seasonal variations from images captured from the front of a train traversing Norway. 
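For reference, the Recall@k protocol described next (a retrieval counts as correct if any of the top-k retrieved references lies within 25 meters of the query) can be sketched as follows. This is an illustrative reimplementation in Python, not the exact MixVPR evaluation code; array names and the precomputed ground-truth positive lists are assumptions.

import numpy as np

def recall_at_k(q_desc, db_desc, positives, ks=(1, 5, 10)):
    # q_desc: (Q, D) and db_desc: (N, D) L2-normalized global descriptors.
    # positives[i]: indices of database images within 25 m of query i,
    # precomputed from the datasets' ground truth.
    sims = q_desc @ db_desc.T                        # cosine similarity for normalized vectors
    order = np.argsort(-sims, axis=1)[:, :max(ks)]   # top-k database candidates per query
    recalls = {}
    for k in ks:
        hits = sum(len(np.intersect1d(order[i, :k], positives[i])) > 0
                   for i in range(len(q_desc)))
        recalls[f"R@{k}"] = 100.0 * hits / len(q_desc)
    return recalls

# Toy usage with random descriptors and dummy ground truth.
rng = np.random.default_rng(0)
q = rng.normal(size=(5, 8)); q /= np.linalg.norm(q, axis=1, keepdims=True)
db = rng.normal(size=(100, 8)); db /= np.linalg.norm(db, axis=1, keepdims=True)
gt = [rng.choice(100, size=3, replace=False) for _ in range(5)]
print(recall_at_k(q, db, gt))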
We use Recall@k (R@k) as the metric for all our experiments, as it is standard in related work. We use evaluation data and code from MixVPR [2], which considers a retrieval as correct if an image at less than 25 meters from the query is among the top-k predicted candidates.\nAs shown in Table 1, our model outperforms all previous methods on all datasets and all metrics. It is worth highlighting the metrics saturation observed in MSLS Val, Pitts250k-test and SPED, and on the other hand the challenging nature of MSLS Challenge and NordLand, for which all baselines show lower R@k. The MSLS Challenge dataset, with its diversity, extensive size and closed labels, and NordLand, with its extreme sample similarity and seasonal shifts, emerge then as key benchmarks for assessing VPR performance. Although our DINOv2 SALAD shows a significant improvement on all benchmarks, it is precisely in MSLS Challenge and NordLand where we obtain the most substantial recall increases, with +7.6%, +11.7%, +9.6% and +17.6%, +14.6%, +12.0% for R@1, R@5 and R@10, respectively, over the second best.\nIn Table 2, we compare our DINOv2 SALAD method, which solely operates on a single retrieval stage, against the leading two-stage VPR techniques. In this comparison, we include the best performing models in the literature, namely R2Former [64], TransVPR [57], and Patch-NetVLAD [23], which incorporate a re-ranking refinement. Note in this table how our DINOv2 SALAD, despite being orders of magnitude faster and smaller in memory, significantly outperforms all these two-stage methods on all benchmarks. This finding not only highlights the efficiency of our model but also demonstrates the effectiveness of global retrieval using our novel SALAD aggregation. Additionally, considering our method's reliance on local features, we believe that a re-ranking stage could also be applied on top of these, potentially increasing our recall metrics even further but at the price of a higher computational footprint." }, { "figure_ref": [], "heading": "Table 3", "publication_ref": [ "b27" ], "table_ref": [], "text": "Table 3. Ablations. The first two rows correspond to two baselines in the literature [4,28], the rest to different aggregations appended to DINOv2, including our DINOv2 SALAD. Note that only DINO NetVLAD, with a significantly bigger descriptor size than ours, is able to show competitive results. We outperform all the other DINOv2 baselines of similar descriptor sizes by a large margin." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [ "b26", "b39", "b60", "b44", "b40" ], "table_ref": [ "tab_3", "tab_5", "tab_4" ], "text": "In this section, we present different studies that evaluate the efficacy of the components and configurations in our proposed methods. Effect of DINOv2. We assess the impact of the DINOv2 backbone and our optimal transport aggregation SALAD separately. For this, we compare with the existing baselines of ResNet NetVLAD and AnyLoc, the latter applying VLAD on top of a pretrained DINOv2 encoder. We integrate the DINOv2 backbone with various aggregation modules, obtaining a handful of performant techniques that improve their respective previous results. As shown in Table 3, all of these configurations outperform the baselines, even though AnyLoc already uses DINOv2. 
This validates the integration of DINOv2 in end-to-end fine-tuning to refine its feature extraction capabilities.\nEffect of SALAD. Our experiments in Table 3 show that aggregation also matters. Even the recent MixVPR aggregation coupled with DINOv2 does not match the performance of DINOv2 NetVLAD and DINOv2 SALAD. We believe that the DINOv2 backbone is especially suitable for local feature aggregation, as its features work remarkably well in dense visual perception tasks [27,40,61]. Although DINOv2 NetVLAD achieves comparable performance to SALAD, it employs a descriptor almost three times as big. Besides, the generalization performance of DINOv2 NetVLAD is limited, as observed in the NordLand results. We attribute this to NetVLAD's priors being initialized on urban scenarios, which constrains the convergence of the system. In our experiments we also trained a slimmer DINOv2 NetVLAD version, whose features are dimensionally reduced as described in Section 3.3, targeting a final descriptor of roughly the same size as SALAD. In this fairer setup, DINOv2 SALAD clearly outperforms DINOv2 NetVLAD.\nEffect of hyperparameters. DINOv2 comes in different sizes (Table 4) that affect the number of parameters, inference speed, and representation capabilities. As shown in Figure 3, more parameters do not always result in better performance. Excessively big models might be harder to train or prone to overfit the training set. From these results, we chose the DINOv2-B backbone. Regarding the dimension reduction, we observed that the Recall@1 is quite stable for different descriptor sizes, with a slight peak at 128 dimensions and worse performance after that. A similar trade-off arises in Table 6 for the number of blocks to train. We observed that fine-tuning two or four blocks yields the best results without significant computation overhead. Effect of SALAD components. In Table 5, we show how different components of our SALAD pipeline affect the final performance. Both the global token, which appends global information not captured in local features, and the dustbin, which helps distill the aggregated features, contribute to the performance of SALAD. We also trained a model using a dual-softmax [45] to solve the optimal transport assignment, following LoFTR and GlueStick [41,51]. Although it achieves only slightly worse performance, the Sinkhorn Algorithm is theoretically sound and provides a better acronym to our method." }, { "figure_ref": [ "fig_2", "fig_3", "fig_4" ], "heading": "Introspective Results", "publication_ref": [], "table_ref": [], "text": "We provide an introspection of our model's performance through a series of illustrative figures. Figure 4 visualizes the weights that are not assigned to the 'dustbin'. This visualization offers insight into the parts of the input image that the network considers informative and validates the effective use of the 'dustbin' by our SALAD aggregation. In Figure 5, we display the assignment distribution of patches from two different images depicting the same place. It demonstrates the model's ability to consistently distribute most of the weights into the same bins for patches representing similar regions. Such repeatable and consistent assignment across different images of the same place is crucial for the reliability and performance of the system. 
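As a side note, heatmaps like the ones in Figure 4 can be reproduced with a few lines of PyTorch by reshaping the mass not assigned to the dustbin into the patch grid and upsampling it to the image resolution. The function below is an illustrative sketch (tensor names and the 14-pixel patch size are assumptions), not the exact plotting code used for the figure.

import torch
import torch.nn.functional as F

def dustbin_heatmap(P_bar, image_hw, patch=14):
    # P_bar: (n, m+1) Sinkhorn assignment for one image; the last column is the dustbin.
    # Returns an (H, W) map of the mass kept by the clusters, i.e. 1 - dustbin weight.
    h, w = image_hw[0] // patch, image_hw[1] // patch
    kept = 1.0 - P_bar[:, -1]                # informative mass per patch token
    grid = kept.reshape(1, 1, h, w)
    return F.interpolate(grid, size=image_hw, mode="bilinear", align_corners=False)[0, 0]

heat = dustbin_heatmap(torch.rand(16 * 16, 65), (224, 224))
print(heat.shape)  # torch.Size([224, 224])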
Finally, in Figure 6, we showcase various query images alongside their respective top-3 retrievals made by our system. DINOv2 SALAD is able to retrieve correct predictions even under challenging conditions, such as severe changes in illumination or viewpoint." }, { "figure_ref": [], "heading": "Conclusions and Limitations", "publication_ref": [], "table_ref": [], "text": "In this paper, we have proposed DINOv2 SALAD, a novel model for VPR that outperforms previous baselines by a substantial margin. This achievement is the result of combining two key contributions: a fine-tuned DINOv2 backbone for enhanced feature extraction and our novel SALAD (Sinkhorn Algorithm for Locally Aggregated Descriptors) module for feature aggregation. Our extensive experiments demonstrate the effectiveness of these modules, highlighting the model's single-stage nature and exceptionally fast training and inference speed.\nOur research primarily focused on standard benchmarks, predominantly outdoor environments. The use of the DINOv2 backbone, while effective in these contexts, might encounter limitations in scenarios vastly different from DINOv2's training distribution, such as medical images. Additionally, in SALAD we use an optimal transport assignment in its simplest form. More sophisticated constraints could improve the resulting assignment, a very relevant aspect for our future work. " } ]
The task of Visual Place Recognition (VPR) aims to match a query image against references from an extensive database of images from different places, relying solely on visual cues. State-of-the-art pipelines focus on the aggregation of features extracted from a deep backbone, in order to form a global descriptor for each image. In this context, we introduce SALAD (Sinkhorn Algorithm for Locally Aggregated Descriptors), which reformulates NetVLAD's soft-assignment of local features to clusters as an optimal transport problem. In SALAD, we consider both feature-to-cluster and cluster-to-feature relations and we also introduce a 'dustbin' cluster, designed to selectively discard features deemed non-informative, enhancing the overall descriptor quality. Additionally, we leverage and fine-tune DINOv2 as a backbone, which provides enhanced description power for the local features, and dramatically reduces the required training time. As a result, our single-stage method not only surpasses single-stage baselines in public VPR datasets, but also surpasses two-stage methods that add a re-ranking with significantly higher cost. Code and models are available at https://github.com/serizba/salad.
Optimal Transport Aggregation for Visual Place Recognition
[ { "figure_caption": "Figure 2 .2Figure2. Overview of our method. First, the DINOv2 backbone extracts local features and a global token from an input image. Then, a small MLP, score projection, computes a score matrix for feature-to-cluster and dustbin relationships. The optimal transport module uses the Sinkhorn algorithm to transform this matrix into an assignment, and subsequently, dimensionality-reduced features are aggregated into the final descriptor based on this assignment and concatenated with the global token.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Heatmap of local features importance. Left images show the original pictures, their right counterparts represent the weights not assigned to the 'dustbin'. Note how the network learns to discard uninformative regions like skies, roads or dynamic objects, and instead focus on distinctive patterns in buildings and vegetation.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Illustration of feature-to-cluster assignments. See at the leftmost and rightmost part of the figure two different views of the same place. Framed by red and blue squares we highlight two corresponding patches in each of the images. The central part of the figure shows the feature-to-cluster assignments for these patches. Note how DINOv2 SALAD correctly assigns the features to the same bins for both views, even with different local texture.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. DINOv2 SALAD qualitative results at MSLS. The left column shows several queries and the three other ones shows the top-3 candidates retrieved by our DINOv2 SALAD. Candidates are framed in green if they correspond to the same place as the query, and in red if they do not. Note the correct retrievals under seasonal, weather, viewpoint and day-night changes. Note also a challenging failure case in the last row, due to non-disciminative image content.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Comparison against single-stage baselines. We compare our DINOv2 SALAD against two popular baselines [4, 44] and the three baselines that show best results in recent literature [2,5,6]. Our results are the state of the art in all metrics, in most cases by a significant margin. Note, in particular, the large improvement in the most challenging benchmarks, MSLS Challenge and NordLand. 
† We reproduced GeM and CosPlace results training during 80 epochs following MixVPR training pipeline .", "figure_data": "[4]35.147.451.782.689.692.032.647.153.390.596.297.478.788.391.4GeM [44] †49.764.267.078.286.689.621.637.344.287.094.496.366.783.488.0CosPlace [5] †60.971.776.785.490.792.343.860.166.892.497.298.179.690.492.8MixVPR [2]64.075.980.688.092.794.658.474.680.094.698.399.085.292.194.6EigenPlaces [6]67.477.181.789.393.795.054.468.874.194.198.098.769.982.987.6DINOv2 SALAD (ours) 75.088.891.392.296.497.076.089.292.095.198.599.192.196.296.5MethodFeature Dim Global LocalMemory (GB)Latency (ms) Retrieval RerankingMSLS Challenge R@1 R@5 R@10MSLS Val R@1 R@5 R@10Patch-NetVLAD [23]40962826 × 4096908.309.558377.1748.157.660.579.586.287.7TransVPR [57]2561200 × 25622.726.271757.7063.974.077.586.891.292.4R2Former [64]256500 × 1314.78.88202.3773.085.988.889.795.096.2DINOv2 SALAD (ours) 8192 + 2560.00.632.410.075.088.891.392.296.497.0", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison against baselines with re-ranking. We compare our single-stage DINOv2 SALAD with methods that perform a re-ranking stage to improve performance. Without using re-ranking, our DINOv2 SALAD outperforms all other methods while being orders of magnitude faster. Latency metrics obtained from [64] using a RTX A5000 GPU. Latency for DINOv2 SALAD was computed using a RTX 3090. Memory footprint is calculated on the MSLS Val dataset, which includes around 18, 000 images.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "DINOv2 configurations. We chose DINOv2-B for its balance between capacity and size.", "figure_data": "DINO SALADFigure 3. Recall@1 in MSLS Val for different configurations ofthe DINOv2 backbone and the dimension of the aggregated fea-tures, l.Model Dim. size Num. blocks Num. parametersS3841221MB7681286ML102424300MG1536401100M", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study of the SALAD components. All components play a role in our state-of-the-art results. In general, the dustbin component is the most influential in the recall, then the global token, and finally the optimal transport method chosen.", "figure_data": "5 R@10DINOv2 SALAD w/o dustbin91.495.896.2DINOv2 SALAD w/o global token 91.896.096.2DINOv2 SALAD (Dual Softmax)91.995.796.5DINOv2 SALAD92.296.497.0MethodMSLS Val R@1 R@5 R@10DINOv2 SALAD (frozen)88.595.096.2DINOv2 SALAD (train 2 last blocks) 92.096.597.0DINOv2 SALAD (train 4 last blocks) 92.296.497.0DINOv2 SALAD (train 6 last blocks) 91.696.297.0DINOv2 SALAD (train all blocks)89.295.196.1", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Effect of fine-tuning different number of DINOv2 blocks.", "figure_data": "", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Sergio Izquierdo; Javier Civera
[ { "authors": "Amar Ali-Bey; Brahim Chaib-Draa; Philippe Giguère", "journal": "Neurocomputing", "ref_id": "b0", "title": "Gsv-cities: Toward appropriate supervised visual place recognition", "year": "2022" }, { "authors": "Amar Ali-Bey; Brahim Chaib-Draa; Philippe Giguere", "journal": "", "ref_id": "b1", "title": "Mixvpr: Feature mixing for visual place recognition", "year": "2023" }, { "authors": "Relja Arandjelovic; Andrew Zisserman", "journal": "", "ref_id": "b2", "title": "All about vlad", "year": "2013" }, { "authors": "Relja Arandjelovic; Petr Gronat; Akihiko Torii; Tomas Pajdla; Josef Sivic", "journal": "", "ref_id": "b3", "title": "Netvlad: Cnn architecture for weakly supervised place recognition", "year": "2016" }, { "authors": "Gabriele Berton; Carlo Masone; Barbara Caputo", "journal": "", "ref_id": "b4", "title": "Rethinking visual geo-localization for large-scale applications", "year": "2022" }, { "authors": "Gabriele Berton; Gabriele Trivigno; Barbara Caputo; Carlo Masone", "journal": "", "ref_id": "b5", "title": "Eigenplaces: Training viewpoint robust models for visual place recognition", "year": "2023" }, { "authors": "Rishi Bommasani; Drew A Hudson; Ehsan Adeli; Russ Altman; Simran Arora; Sydney Von Arx; Jeannette Michael S Bernstein; Antoine Bohg; Emma Bosselut; Brunskill", "journal": "", "ref_id": "b6", "title": "On the opportunities and risks of foundation models", "year": "2021" }, { "authors": "Nicolas Bonneel; Julie Digne", "journal": "Wiley Online Library", "ref_id": "b7", "title": "A survey of optimal transport for computer graphics and computer vision", "year": "2023" }, { "authors": "Cesar Cadena; Luca Carlone; Henry Carrillo; Yasir Latif; Davide Scaramuzza; José Neira; Ian Reid; John J Leonard", "journal": "IEEE Transactions on robotics", "ref_id": "b8", "title": "Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age", "year": "2016" }, { "authors": "Carlos Campos; Richard Elvira; Juan J Gómez Rodríguez; José Mm Montiel; Juan D Tardós", "journal": "IEEE Transactions on Robotics", "ref_id": "b9", "title": "Orb-slam3: An accurate open-source library for visual, visual-inertial, and multimap slam", "year": "2021" }, { "authors": "Bingyi Cao; Andre Araujo; Jack Sim", "journal": "Springer", "ref_id": "b10", "title": "Unifying deep local and global features for image search", "year": "2020" }, { "authors": "Zetao Chen; Adam Jacobson; Niko Sünderhauf; Ben Upcroft; Lingqiao Liu; Chunhua Shen; Ian Reid; Michael Milford", "journal": "IEEE", "ref_id": "b11", "title": "Deep learning features at scale for visual place recognition", "year": "2017" }, { "authors": "Zetao Chen; Fabiola Maffra; Inkyu Sa; Margarita Chli", "journal": "IEEE", "ref_id": "b12", "title": "Only look once, mining distinctive landmarks from convnet for visual place recognition", "year": "2017" }, { "authors": "Zetao Chen; Lingqiao Liu; Inkyu Sa; Zongyuan Ge; Margarita Chli", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b13", "title": "Learning context flexible attention model for long-term visual place recognition", "year": "2018" }, { "authors": "Mark Cummins; Paul Newman", "journal": "The International journal of robotics research", "ref_id": "b14", "title": "Fab-map: Probabilistic localization and mapping in the space of appearance", "year": "2008" }, { "authors": "Marco Cuturi", "journal": "Advances in neural information processing systems", "ref_id": "b15", "title": "Sinkhorn distances: Lightspeed computation of optimal transport", 
"year": "2013" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b16", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Dorian Gálvez; -López ; Juan D Tardos ", "journal": "IEEE Transactions on Robotics", "ref_id": "b17", "title": "Bags of binary words for fast place recognition in image sequences", "year": "2012" }, { "authors": "Sourav Garg; Tobias Fischer; Michael Milford", "journal": "", "ref_id": "b18", "title": "Where is your place, visual place recognition?", "year": "2021" }, { "authors": "Sourav Garg; Madhu Vankadari; Michael Milford", "journal": "PMLR", "ref_id": "b19", "title": "Seqmatchnet: Contrastive learning with sequence matching for place recognition & relocalization", "year": "2022" }, { "authors": "Kai Han; Yunhe Wang; Hanting Chen; Xinghao Chen; Jianyuan Guo; Zhenhua Liu; Yehui Tang; An Xiao; Chunjing Xu; Yixing Xu", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b20", "title": "A survey on vision transformer", "year": "2022" }, { "authors": "Stephen Hausler; Adam Jacobson; Michael Milford", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b21", "title": "Multi-process fusion: Visual place recognition using multiple image processing methods", "year": "2019" }, { "authors": "Stephen Hausler; Sourav Garg; Ming Xu; Michael Milford; Tobias Fischer", "journal": "", "ref_id": "b22", "title": "Patch-netvlad: Multi-scale fusion of locally-global descriptors for place recognition", "year": "2021" }, { "authors": "Ziyang Hong; Yvan Petillot; David Lane; Yishu Miao; Sen Wang", "journal": "", "ref_id": "b23", "title": "Textplace: Visual place recognition and topological localization through reading scene texts", "year": "2019" }, { "authors": "Arnold Irschara; Christopher Zach; Jan-Michael Frahm; Horst Bischof", "journal": "IEEE", "ref_id": "b24", "title": "From structure-from-motion point clouds to fast location recognition", "year": "2009" }, { "authors": "Hervé Jégou; Matthijs Douze; Cordelia Schmid; Patrick Pérez", "journal": "IEEE", "ref_id": "b25", "title": "Aggregating local descriptors into a compact image representation", "year": "2010" }, { "authors": "Markus Käppeler; Kürsat Petek; Niclas Vödisch; Wolfram Burgard; Abhinav Valada", "journal": "", "ref_id": "b26", "title": "Few-shot panoptic segmentation with foundation models", "year": "2023" }, { "authors": "Nikhil Keetha; Avneesh Mishra; Jay Karhade; Krishna Murthy Jatavallabhula; Sebastian Scherer; Madhava Krishna; Sourav Garg", "journal": "", "ref_id": "b27", "title": "Anyloc: Towards universal visual place recognition", "year": "2023" }, { "authors": "Ahmad Khaliq; Shoaib Ehsan; Zetao Chen; Michael Milford; Klaus Mcdonald-Maier", "journal": "IEEE transactions on robotics", "ref_id": "b28", "title": "A holistic visual place recognition approach using lightweight cnns for significant viewpoint and appearance changes", "year": "2019" }, { "authors": "Youngwan Lee; Jonghee Kim; Jeffrey Willette; Sung Ju Hwang", "journal": "", "ref_id": "b29", "title": "Mpvit: Multi-path vision transformer for dense prediction", "year": "2022" }, { "authors": "María Leyva-Vallina; Nicola Strisciuglio; Nicolai Petkov", "journal": "", "ref_id": "b30", "title": "Data-efficient large scale place recognition with graded 
similarity supervision", "year": "2023" }, { "authors": "Tsung-Yi Lin; Yin Cui; Serge Belongie; James Hays", "journal": "", "ref_id": "b31", "title": "Learning deep representations for ground-to-aerial geolocalization", "year": "2015" }, { "authors": "Ze Liu; Han Hu; Yutong Lin; Zhuliang Yao; Zhenda Xie; Yixuan Wei; Jia Ning; Yue Cao; Zheng Zhang; Li Dong", "journal": "", "ref_id": "b32", "title": "Swin transformer v2: Scaling up capacity and resolution", "year": "2022" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b33", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Stephanie Lowry; Niko Sünderhauf; Paul Newman; John J Leonard; David Cox; Peter Corke; Michael J Milford", "journal": "ieee transactions on robotics", "ref_id": "b34", "title": "Visual place recognition: A survey", "year": "2015" }, { "authors": "Carlo Masone; Barbara Caputo", "journal": "IEEE Access", "ref_id": "b35", "title": "A survey on deep visual place recognition", "year": "2021" }, { "authors": "J Michael; Gordon F Milford; Wyeth", "journal": "IEEE", "ref_id": "b36", "title": "Seqslam: Visual route-based navigation for sunny summer days and stormy winter nights", "year": "2012" }, { "authors": "Ana C Murillo; Gautam Singh; Jana Kosecka; José Jesús Guerrero", "journal": "IEEE Transactions on Robotics", "ref_id": "b37", "title": "Localization in urban environments using a panoramic gist descriptor", "year": "2012" }, { "authors": "Kevin Musgrave; Serge Belongie; Ser-Nam Lim", "journal": "Springer", "ref_id": "b38", "title": "A metric learning reality check", "year": "2020" }, { "authors": "Maxime Oquab; Timothée Darcet; Théo Moutakanni; Huy Vo; Marc Szafraniec; Vasil Khalidov; Pierre Fernandez; Daniel Haziza; Francisco Massa; Alaaeldin El-Nouby", "journal": "", "ref_id": "b39", "title": "Dinov2: Learning robust visual features without supervision", "year": "2023" }, { "authors": "Rémi Pautrat; Iago Suárez; Yifan Yu; Marc Pollefeys; Viktor Larsson", "journal": "", "ref_id": "b40", "title": "Gluestick: Robust image matching by sticking points and lines together", "year": "2023" }, { "authors": "Ofir Pele; Michael Werman", "journal": "IEEE", "ref_id": "b41", "title": "Fast and robust earth mover's distances", "year": "2009" }, { "authors": "Noé Pion; Martin Humenberger; Gabriela Csurka; Yohann Cabon; Torsten Sattler", "journal": "IEEE", "ref_id": "b42", "title": "Benchmarking image retrieval for visual localization", "year": "2020" }, { "authors": "Filip Radenović; Giorgos Tolias; Ondřej Chum", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b43", "title": "Finetuning cnn image retrieval with no human annotation", "year": "2018" }, { "authors": "Ignacio Rocco; Mircea Cimpoi; Relja Arandjelović; Akihiko Torii; Tomas Pajdla; Josef Sivic", "journal": "Advances in neural information processing systems", "ref_id": "b44", "title": "Neighbourhood consensus networks", "year": "2018" }, { "authors": "Paul-Edouard Sarlin; Cesar Cadena; Roland Siegwart; Marcin Dymczyk", "journal": "", "ref_id": "b45", "title": "From coarse to fine: Robust hierarchical localization at large scale", "year": "2019" }, { "authors": "Paul-Edouard Sarlin; Daniel Detone; Tomasz Malisiewicz; Andrew Rabinovich", "journal": "", "ref_id": "b46", "title": "Superglue: Learning feature matching with graph neural networks", "year": "2020" }, { "authors": "Stefan Schubert; Peer Neubert; Sourav Garg; Michael Milford; Tobias Fischer", "journal": "IEEE Robotics & 
Automation Magazine", "ref_id": "b47", "title": "Visual Place Recognition: A Tutorial", "year": "2023" }, { "authors": "Shihao Shao; Kaifeng Chen; Arjun Karpur; Qinghua Cui; André Araujo; Bingyi Cao", "journal": "", "ref_id": "b48", "title": "Global features are all you need for image retrieval and reranking", "year": "2023" }, { "authors": "Richard Sinkhorn; Paul Knopp", "journal": "Pacific Journal of Mathematics", "ref_id": "b49", "title": "Concerning nonnegative matrices and doubly stochastic matrices", "year": "1967" }, { "authors": "Jiaming Sun; Zehong Shen; Yuang Wang; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b50", "title": "Loftr: Detector-free local feature matching with transformers", "year": "2021" }, { "authors": "Niko Sünderhauf; Peter Protzel", "journal": "IEEE", "ref_id": "b51", "title": "Brief-gist-closing the loop by simple means", "year": "2011" }, { "authors": "Niko Sünderhauf; Sareh Shirazi; Feras Dayoub; Ben Upcroft; Michael Milford", "journal": "IEEE", "ref_id": "b52", "title": "On the performance of convnet features for place recognition", "year": "2015" }, { "authors": "Hajime Taira; Masatoshi Okutomi; Torsten Sattler; Mircea Cimpoi; Marc Pollefeys; Josef Sivic; Tomas Pajdla; Akihiko Torii", "journal": "", "ref_id": "b53", "title": "Inloc: Indoor visual localization with dense matching and view synthesis", "year": "2018" }, { "authors": "Marvin Teichmann; Andre Araujo; Menglong Zhu; Jack Sim", "journal": "", "ref_id": "b54", "title": "Detect-to-retrieve: Efficient regional aggregation for image search", "year": "2019" }, { "authors": "Akihiko Torii; Josef Sivic; Tomas Pajdla; Masatoshi Okutomi", "journal": "", "ref_id": "b55", "title": "Visual place recognition with repetitive structures", "year": "2013" }, { "authors": "Ruotong Wang; Yanqing Shen; Weiliang Zuo; Sanping Zhou; Nanning Zheng", "journal": "", "ref_id": "b56", "title": "Transvpr: Transformer-based place recognition with multi-level attention aggregation", "year": "2022" }, { "authors": "Xun Wang; Xintong Han; Weilin Huang; Dengke Dong; Matthew R Scott", "journal": "", "ref_id": "b57", "title": "Multi-similarity loss with general pair weighting for deep metric learning", "year": "2019" }, { "authors": "Frederik Warburg; Soren Hauberg; Manuel Lopez-Antequera; Pau Gargallo; Yubin Kuang; Javier Civera", "journal": "", "ref_id": "b58", "title": "Mapillary street-level sequences: A dataset for lifelong place recognition", "year": "2020" }, { "authors": "Jiankai Xing; Fujun Luan; Ling-Qi Yan; Xuejun Hu; Houde Qian; Kun Xu", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b59", "title": "Differentiable rendering using rgbxy derivatives and optimal transport", "year": "2022" }, { "authors": "Jingfeng Yao; Xinggang Wang; Shusheng Yang; Baoyuan Wang", "journal": "Information Fusion", "ref_id": "b60", "title": "Vitmatte: Boosting image matting with pretrained plain vision transformers", "year": "2024" }, { "authors": "Chao Zhang; Stephan Liwicki; Roberto Cipolla", "journal": "", "ref_id": "b61", "title": "Beyond the cls token: Image reranking using pretrained vision transformers", "year": "2022" }, { "authors": "Xiwu Zhang; Lei Wang; Yan Su", "journal": "Pattern Recognition", "ref_id": "b62", "title": "Visual place recognition: A survey from deep learning perspective", "year": "2021" }, { "authors": "Sijie Zhu; Linjie Yang; Chen Chen; Mubarak Shah; Xiaohui Shen; Heng Wang", "journal": "", "ref_id": "b63", "title": "R2former: Unified retrieval and reranking transformer for place recognition", 
"year": "2023" } ]
[ { "formula_coordinates": [ 3, 352.11, 552.48, 193.01, 9.72 ], "formula_id": "formula_0", "formula_text": "s i = W s2 (σ(W s1 (t i ) + b s1 )) + b s2(1)" }, { "formula_coordinates": [ 4, 50.11, 97.56, 236.25, 45.9 ], "formula_id": "formula_1", "formula_text": "si,m+1 = z1 n (2) being 1 n = [1, . . . , 1] ⊤ ∈ R n a n-dimensional vector of ones." }, { "formula_coordinates": [ 4, 104.28, 288.51, 182.08, 12.06 ], "formula_id": "formula_2", "formula_text": "P1 m+1 = µ and P⊤ 1 n = κ.(3)" }, { "formula_coordinates": [ 4, 93.39, 558.23, 192.98, 9.72 ], "formula_id": "formula_3", "formula_text": "f i = W f2 (σ(W f1 (t i ) + b f1 )) + b f1(4)" }, { "formula_coordinates": [ 4, 126.07, 686.01, 160.29, 30.32 ], "formula_id": "formula_4", "formula_text": "V j,k = n i=1 P i,k • f i,k(5)" }, { "formula_coordinates": [ 4, 346.86, 146.14, 198.25, 9.72 ], "formula_id": "formula_5", "formula_text": "g = W g2 (σ(W g1 (t n+1 ) + b g1 )) + b g1(6)" } ]
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b19", "b2", "b46", "b53", "b74", "b16", "b74", "b27", "b67", "b29", "b8", "b23", "b65", "b23", "b65" ], "table_ref": [], "text": "Cancer is one of the leading causes of death worldwide. Over the past decades, substantial endeavors have been made to detect cancers from histology images with the aim of improving survival rates through early screening. Identification of nuclear components in the histology landscape is often the first step toward a detailed analysis of histology images. Quantitative characterizations of nuclear morphology and structure play a pivotal role in cancer diagnosis, treatment planning, and survival analysis, which have been verified by a wide range of studies, see for example [Alberts et al., 2015]. However, large-scale analysis on the cell level is extremely labor-intensive and time-consuming since a whole slide image (WSI) typically contains tens of thousands of nuclei of various types. Moreover, such subjective interpretations have been demonstrated to suffer from large inter-and intraobserver variability [He et al., 2021]. Consequently, there is a compelling pursuit of precise automatic algorithms for nucleus instance segmentation to aid in histopathologic cancer diagnosis. Nonetheless, the blurred cell contours, overlapping cell clusters, and variances in nuclei staining, shape and size, pose substantial challenges for the developers.\nRecent years have witnessed significant advancements in the filed of nucleus instance segmentation owing to the impressive performances brought by various methods based on regression of nuclear proxy maps [Chen et al., 2016;Naylor et al., 2018;Schmidt et al., 2018;Zhou et al., 2019;Graham et al., 2019;Zhou et al., 2019;Ilyas et al., 2022;Chen et al., 2023b;Chen et al., 2023a] (see Fig. 1 (a)). Regrettably, these methods necessitate carefully crafted postprocessing to derive nuclear instances from the estimated maps. This step demands meticulous hyper-parameter tuning and is vulnerable to noise [Yao et al., 2023].\nRecently, the segment anything model (SAM) has emerged as a generic segmentation network for various image types, whose impressive generalization ability and versatility can be attributed to its structural design and the strong representation learned from 11M images annotated with 1B masks [Kirillov et al., 2023]. Several studies have been undertaken to investigate the zero-shot performance of SAM on nucleus segmentation [Deng et al., 2023] or transfer its well-learned representation to boost the segmentation accuracy [Hörst et al., 2023;Xu et al., 2023]. Specifically, [Hörst et al., 2023] reuses SAM's well-trained image encoder to construct a more pow-arXiv:2311.15939v4 [cs.CV] 24 Jan 2024 erful regression model and integrates it into the aforementioned nucleus instance segmentation workflow. Despite the promising results, we argue that this approach does not fully exploit the knowledge encapsulated in the integrated architecture of SAM. Conversely, [Xu et al., 2023] maintains the philosophy of SAM thoroughly. They fine-tune the entire SAM in a one-prompt-all-nuclei recipe for nucleus semantic segmentation. Nevertheless, this method expects users to supply precise prompts, which is impractical since crafting such prompts requires extensive medical expertise. 
Moreover, it falls short in providing nucleus instance information.\nIn this work, we propose to fine-tune SAM in a oneprompt-one-nucleus regime to fully unleash its potential for nucleus instance segmentation. To eliminate the need for crafted prompts during inference, we develop a prompter that automatically generates nuclei prompts by refining and classifying pre-defined anchor points on an input image. Specially, we incorporate an auxiliary task of nuclear region segmentation into prompter learning. This integration guides the model's attention towards foreground areas, thereby improving the quality of generated prompts. During the inference stage, the predicted nuclear region mask is further utilized to filter out false positive prompts. The consolidation of the prompter and segmentor (i.e., the fine-tuned SAM) establishes a novel solution for automatic nucleus instance segmentation. Given their linkage through nuclei prompts, we designate our approach as PromptNucSeg, and its pipeline is depicted in Fig. 1 (b). Compared to the currently prevailing methods, our approach does not require complex postprocessing. Moreover, we devise a trick that treats adjacent nuclei as negative prompts to improve the model's segmentation of overlapping nuclei.\nOur contributions can be summarized as follows: 2 Related Work" }, { "figure_ref": [], "heading": "Utilization of SAM for Medical Image segmentation", "publication_ref": [ "b29", "b25", "b43", "b6", "b61", "b63", "b69", "b39", "b33", "b8", "b65", "b23" ], "table_ref": [], "text": "Segment Anything Model (SAM) [Kirillov et al., 2023] is the first groundbreaking model for universal image segmentation. It has achieved impressive results on a wide range of natural image tasks. Nevertheless, due to the dramatic domain gap between natural and medical images, SAM's performance significantly declines when applied for medical image segmentation [Huang et al., 2023;Ma and Wang, 2023]. To bridge this gap, many studies opt to fine-tine SAM with meticulously curated medical data [Cheng et al., 2023;Wang et al., 2023;Wu et al., 2023;Zhang and Liu, 2023;Lin et al., 2023;Lei et al., 2023]. These works mainly focus on the segmentation of anatomical structures and organs in computed tomography (CT), magnetic resonance (MR) and ultrasound images.\nIn terms of histology images, [Deng et al., 2023] assesses SAM's performance for tumor, non-tumor tissue and nucleus segmentation. The results suggest that the vanilla SAM achieves remarkable segmentation performance for large connected tissue objects, however, it does not consistently achieve satisfactory results for dense nucleus instance segmentation. To tackle this issue, SPPNet [Xu et al., 2023] fine-tunes a distilled lightweight SAM [Zhang et al., 2023] in a one-prompt-all-nuclei manner for nucleus semantic segmentation. Despite the improved outcomes, this method relies on manual prompts and lacks the capacity to furnish nucleus instance information. [Hörst et al., 2023] builds a vision transformer-based U-Net-shaped model, employing SAM's pre-trained image encoder as its backbone to better fit the nuclear proxy maps. We argue that this approach underutilizes the knowledge embedded in SAM's integrated architecture." 
}, { "figure_ref": [], "heading": "Nucleus Instance Segmentation", "publication_ref": [ "b18", "b16", "b67", "b41", "b67", "b74", "b27", "b46", "b16", "b53", "b48", "b72", "b10", "b23", "b67" ], "table_ref": [], "text": "Current methods for nucleus instance segmentation can be divided into two categories: top-down and bottom-up.\nTop-down methods, such as Mask R-CNN [He et al., 2017], first predict nuclei bounding boxes from a global perspective, and then segment the nucleus instance within each box. Despite the great progress in natural image segmentation and the potential in dealing with overlapping nuclei, topdown methods have demonstrated deficiency on nucleus instance segmentation [Graham et al., 2019;Yao et al., 2023;Lou et al., 2023], attributed to two primary factors. First, on the data side, there are many severely overlapping nuclei in histology images. Consequently, a bounding-box proposal normally contains multiple nuclei with indistinct boundaries, making the network hard to optimize. Second, on the model side, top-down methods typically predict segmentation masks with a fixed resolution (e.g., 28×28 in Mask R-CNN). Subsequently, these masks undergo re-sampling to match the size of their corresponding bounding boxes. This re-sampling process might introduce quantization errors [Yao et al., 2023], posing challenges for accurately segmenting sinuous nuclear boundaries.\nBottom-up methods, initially regressing various types of nuclear proxy maps and then grouping pixels into individual instances through meticulous post-processing, have gained prominence in nucleus instance segmentation owing to their commendable accuracy. These approaches typically entail regressing a nucleus probability map, where the pixel values signify the presence of nuclei, along with some auxiliary maps facilitating the identification of nuclei instances. Specifically, DCAN [Chen et al., 2016], CIA-Net [Zhou et al., 2019], TSFD-Net [Ilyas et al., 2022] and HARU-Net [Chen et al., 2023a] predict the nuclear contour map. DIST [Naylor et al., 2018] regresses the intra-nuclear distance map. HoVer-Net [Graham et al., 2019] predicts horizontal and vertical distances of nuclei pixels to their center of mass. StarDist [Schmidt et al., 2018] and its extension CPP-Net [Chen et al., 2023b] predict distances from each fore- ground pixel to its associated instance boundary along a set of pre-defined directions. Under the premise of some above frameworks, other works [Qu et al., 2019;Zhao et al., 2020;Deshmukh et al., 2022;Hörst et al., 2023] put effort into constructing more favorable features or task-specific loss functions. Overall, while bottom-up methods have exhibited superior accuracy compared to top-down approaches, their accompanying post-processing requires tedious hyperparameter tuning [Yao et al., 2023], which presents a hurdle to their practical application.\nEssentially, our proposed PromptNucSeg belongs the topdown family. But inspired by the promptable property of SAM, we tackle this task from a new perspective. Instead of bounding boxes, we utilize center points to represent nuclei, which are easier to localize and can separate touching objects more precisely. In comparison with bottom-up methods, PromptNucSeg does not require intricate post-processing as the prompter generates point prompts for nuclei in a one-toone relationship and the segmentor predicts the nuclei mask guided by each prompt individually." 
}, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminaries: SAM", "publication_ref": [ "b29" ], "table_ref": [], "text": "SAM [Kirillov et al., 2023] consists of three sub-networks, i.e., image encoder F, prompt encoder P and mask decoder M. The image encoder transforms an input image I ∈ R H×W ×3 into an image embedding. The prompt encoder maps diverse prompts (e.g., a set of positive/negative points, a rough box or mask, free-form text, or combinations thereof) into a compact prompt embedding. Positive prompts indicate regions representing the region-of-interest (ROI) object, whereas negative prompts emphasize areas that should be suppressed as background. Given the image and prompt embedding as input, the mask decoder generates the mask for the ROI object in conjunction with a confidence score (i.e., an estimated IoU)." }, { "figure_ref": [ "fig_1" ], "heading": "Adapt SAM to nucleus instance segmentation", "publication_ref": [ "b6", "b25", "b6", "b65", "b44" ], "table_ref": [], "text": "Despite SAM's remarkable segmentation performance across numerous natural images, recent studies have highlighted its subpar performance on medical images due to the significant domain gap [Cheng et al., 2023;Huang et al., 2023]. A specific observation worth noting is that the objects in SAM's pre-training data are primarily captured in natural scenes, displaying nicely delineated boundaries, while the boundaries of organs or nuclei in medical images are often ambiguous [Cheng et al., 2023;Xu et al., 2023]. To enhance the capability of SAM for nucleus segmentation, we fine-tune it on nucleus instance segmentation datasets to incorporate essential domain-specific knowledge into the model.\nThe fine-tuning procedure is depicted in Fig. 2 (a). Specifically, for each image-label pair (x, y) in a mini-batch, we randomly select Z nucleus instances from the instance map y. Subsequently, a positive point prompt is randomly sampled from the foreground area of each instance. Taking the image x and the point prompt p z as input, we fine-tune SAM to predict the mask of z-th nucleus instance.\nÕz = M (F (x) , P ({p z }) , [mask], [IoU])(1)\nwhere [mask] and [IoU] separately represent the learnable mask and IoU token pre-set in SAM's mask decoder. Õz denotes the predicted mask of the z-th nucleus. We supervise the mask and IoU prediction with the same loss as SAM.\nL sam = ωFL ( Õz , O z ) + DL ( Õz , O z ) + MSE (ν, ν) (2)\nwhere FL, DL and MSE stand for focal loss [Lin et al., 2017b], dice loss [Milletari et al., 2016] and mean-squareerror loss, respectively. O z is the ground-truth mask of the z-th nucleus, ν and ν signify the estimated and actual IoU between Õz and O z , respectively. ω is a weight term. In this work, we opt to freeze the prompt encoder while updating the image encoder and mask decoder via gradient descent." }, { "figure_ref": [ "fig_1" ], "heading": "Learn Prompter", "publication_ref": [ "b55", "b55" ], "table_ref": [], "text": "Generating a unique point prompt for each nucleus is de facto a non-trivial problem. In this study, we choose the nuclear centroid as its prompt for simplicity. To achieve automatic prompt generation, we draw inspiration from [Song et al., 2021] and develop a prompter to predict nuclear centroid coordinates and categories by refining and classifying a set of anchor points placed on an input image. 
In the following content, we denote the set of anchor points as A = {a i } M i=1 and the set of ground-truth points as\nB = {b i } N i=1\n, where b i is extracted from y as the centroid of the i-th nucleus.\nThe prompter learning procedure is depicted in Fig. 2 (b). Specifically, we begin with placing anchor points on an input image x with a step of λ pixels. Then, an image encoder F ′ is employed to construct hierarchical feature maps {P j } L j=2 from x, where the size of P j is (H/2 j , W /2 j ). Following this, we apply the bilinear interpolation method to extract the multi-scale feature vectors {f i,j } L j=2 for anchor point a i according to its normalized coordinates on the feature pyramid. Finally, we concatenate {f i,j } L j=2 and fed it into two dedicated MLP heads for decoding offsets δ i and logits q i ∈ R C+1 with respect to a i , where C is the number of nuclear categories and the extra class is background.\nSince the goal of prompter is to associate a unique point prompt for each nucleus, which anchor point in A should be chosen as the prompt is the key in prompter learning. In principal, for any nucleus centroid in B, the anchor point with lower distance and higher categorical similarity with it is preferred to be chosen. Consequently, the association can be completed by computing the maximum-weight matching ϕ = {(a σ(i) , b i )} N i=1 in a weighted bipartite graph G = (A, B, E), where the weight w i,j of edge connecting vertex a i and b j is defined as:\nw i,j = q i (c j ) -α||â i -b j || 2 (3)\nin which c j is the class of the j-th nucleus, âi = a i + δ i represents the refined position of the i-th anchor point, q i (c j ) is the c j -th element of q i , α is a weight term and || ⋅ || 2 denotes l 2 distance. We use the Hungarian algorithm [Song et al., 2021] to determine ϕ in this work. As a result, the objective of prompter is concretized as narrowing the positional and categorical difference between the selected anchor points and their matched nuclei, while ignoring the unselected anchor points as background. This objective can be achieved by minimizing the following losses.\nL cls = - 1 M ⎛ ⎜ ⎝ N ∑ i=1 log q σ(i) (c i ) + β ∑ a i ∈A ′ log q i (∅) ⎞ ⎟ ⎠ L reg = γ N N ∑ i=1 ||â σ(i) -b i || 2 (4)\nwhere A ′ ⫋ A represents the set of unselected anchor points, ∅ indicates the background class, β and γ are free parameters used to relieve the class imbalance and modulate the effect of regression loss, respectively. Auxiliary task of nuclear region segmentation The training process of the above prompter only involves the nuclear categorical labels and centroid coordinates. However, in the context of nucleus instance segmentation, the mask for each nucleus is also available, which provides rich details about nuclear size, shape and so on. To integrate this valuable information into prompter learning, we construct a simple auxiliary task of nuclear region segmentation to enhance the model's attention to foreground areas and perception of nuclear morphological characteristics. Technically, we introduce a mask head structured as Conv-BN-ReLu-Conv to predict the nuclear probability map Ŝ from P 2 , informed by that the high-resolution P 2 contains abundant fine-grained features crucial for medical image segmentation [Lin et al., 2017a]. We apply the focal loss to supervise the learning of the auxiliary task.\nL aux = FL ( Ŝ, S)(5)\nwhere the ground-truth mask S is derived from the instance map y via a simple thresholding operation. 
The final loss used to optimize the prompter is\nL prompter = L reg + L cls + L aux (6)\nMask-aided prompt filtering Due to insufficient optimization, the prompter would inevitably produce false positive prompts that actually represent non-nucleus objects. To mitigate this issue, we utilize the nuclear probability map predicted by the auxiliary branch to filter out these incorrect predictions. This is achieved by retaining only those prompts with probability values exceeding 0.5 in the inference stage." }, { "figure_ref": [ "fig_2", "fig_2", "fig_2" ], "heading": "Use adjacent nuclei as negative prompts", "publication_ref": [ "b16", "b23", "b21" ], "table_ref": [], "text": "Distinguishing overlapping nuclei is a long-standing challenge in the community of nucleus instance segmentation [Graham et al., 2019;Hörst et al., 2023;He et al., 2023].\nOur approach encounters this challenge as well. Given a finetuned SAM, considering a real-world scenario of two overlapping nuclei in Fig. 3 (a), prompting each nucleus with a single positive prompt results in an over-segmented mask due to the faint boundary, as depicted in Fig. 3 (b). An intuitive idea to resolve this problem is to include the overlapping nucleus as negative prompt to suppress excessive segmentation for the ROI nucleus, as illustrated in Fig. 3 (c).\nNevertheless, the implementation of this idea presents two practical challenges. (1) In the inference phase, it is unknown which nuclei overlap with a ROI nucleus. (2) We empirically observe that including negative prompts solely at test time cannot effectively prevent over-segmented prediction for overlapping nuclei. The inefficiency stems from that the finetuning process involving only positive prompts (see Eq. 1) causes a catastrophic forgetting about the effect of negative prompts.\nTo deal with (1), let pz denote the generated point prompt for the z-th nucleus in a test image, we approximately employ the K points nearest to pz as negative prompts for segmenting this nucleus. To address (2), we incorporate negative prompts into the fine-tuning stage in a similar way. Specifically, we randomly sample a point from each nucleus instance in y and utilize the positive prompt p z along with its K-nearest points {n z,k } K k=1 as negative prompts to predict the mask of the zth nucleus. As a result, we re-formulate the model's forward process described by Eq. 1 as\nÕz = M (F (x) , P ({p z } ∪ {n z,k } K k=1 ) , [mask], [IoU])(7)" }, { "figure_ref": [], "heading": "Inference", "publication_ref": [ "b39", "b6", "b57" ], "table_ref": [], "text": "In line with [Lin et al., 2023;Cheng et al., 2023], we scale down the input spatial resolution of SAM from 1024×1024 to 256×256 for more clinical-friendly deployment of our method. This adaption brings a substantial reduction in GPU memory due to the shorter input sequence in attention layers.\nGiven an image of arbitrary resolution in the inference stage, we first employ the prompter to predict nuclei prompts from a global view. If the longer side of the image is more than 256 pixel, we do not directly apply the segmentor on it to avoid the performance degradation resulting from the interpolation of positional embedding [Su et al., 2021]. Instead, we partition the image into tiles of size 256×256 and gather the masks predicted within each local view. To ensure any nucleus can appear completely in at least one tile, the partitioning is performed in a sliding window manner modulated by an overlapping size of ϵ. 
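A rough sketch of this overlapping tiling is given below; only the 256×256 tile size comes from the text, while the helper name, the way offsets are enumerated, the example value of ϵ, and the assumption that both image sides are at least 256 pixels are ours.

```python
def tile_offsets(height, width, tile=256, eps=64):
    """Yield (y, x) top-left corners of overlapping tiles covering an image.

    Assumes height >= tile and width >= tile; smaller images are processed
    directly without tiling, as described in the text.
    """
    stride = tile - eps
    ys = list(range(0, height - tile + 1, stride))
    xs = list(range(0, width - tile + 1, stride))
    # make sure the last row/column of tiles reaches the image border
    if ys[-1] + tile < height:
        ys.append(height - tile)
    if xs[-1] + tile < width:
        xs.append(width - tile)
    for y in ys:
        for x in xs:
            yield y, x

# Example: with eps = 64, a 1000x1000 Kumar image is covered by a 5x5 grid of
# overlapping 256x256 tiles.
```

Prompts falling inside each tile are passed to the segmentor, and the per-tile masks are pasted back onto the full-resolution canvas.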
In the end, we apply a simple nonmaximum suppression to eliminate duplicate predictions." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experiments Settings", "publication_ref": [ "b31", "b59", "b12", "b14", "b14" ], "table_ref": [], "text": "Datasets Kumar [Kumar et al., 2017] contains 30 H&E stained images (size: 1000×1000) with 21,623 annotated nuclei. The training and test set contains 16 and 14 images, respectively. CPM-17 [Vu et al., 2019] comprises 64 H&E stained images (size: 500×500 or 600×600) with 7,570 annotated nuclei. Both the training and test sets contain 32 images. PanNuke [Gamper et al., 2019;Gamper et al., 2020] is regarded as one of the most challenging datasets for simultaneous nucleus instance segmentation and classification. The dataset includes 7,899 H&E stained images of size 256×256 and 189,744 nuclei classified into five classes. We use the same three-fold cross-validation splits as [Gamper et al., 2020] and report the averaged results across these three splits." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b16" ], "table_ref": [], "text": "Following prior works, we adopt Aggregated Jaccard Index (AJI) and Panoptic Quality (PQ) for comparison. Since AJI suffers from the over-penalization issue in overlapping regions [Graham et al., 2019], we consider PQ as the primary metric. Implementation details is available in our supplementary material." }, { "figure_ref": [], "heading": "Comparison with SOTA Methods", "publication_ref": [ "b23", "b52" ], "table_ref": [], "text": "We use PromptNucSeg-B/L/H to differentiate our method with fine-tuned SAM-B/L/H as the nucleus segmentor. Tab. 1 shows the quantitative comparison results of our approach with SOTA methods on the challenging PanNuke dataset. Without additional techniques such as stain normalization, oversampling or adding an auxiliary tissue classification branch [Hörst et al., 2023], PromptNucSeg-H outperforms the previous best models by 1.1 bPQ and 1.4 mPQ. Moreover, we report the detection and segmentation performance of various methods for each type of nuclei in Tabs. 2 and 3. In a nutshell, our method achieves the highest F1 scores across all five classes for nucleus detection and the highest PQ scores for four out of the five categories in terms of nucleus segmentation. Tab. 4 exhibits the comparison results on the Kumar and CPM-17 benchmarks. In case of the Kumar dataset, our method outshines the runner-up by 0.1 points on AJI and 0.6 points on PQ. Moreover, it demonstrates a substantial improvement on the CPM-17 dataset, exceeding the second-highest AJI and PQ scores by 1.9 and 2.8 points, respectively. Due to the limited pages, we present the qualitative comparison results in the supplementary material.\nWe further analyze the model size, computational cost and inference efficiency of different methods on the PanNuke dataset in Tab. 5. The counterparts demonstrate significantly higher MACs since they generally adopt the U-Net [Ronneberger et al., 2015] architecture with progressive upsampling to regress high-resolution nuclear proxy maps. Besides, they manifest slower inference speed due to the accompanying CPU-intensive post-processing steps. In comparison, PromptNucSeg is cost-effective and efficient since it predicts nuclei prompts and their associated masks directly from hidden features of low resolution." 
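Before turning to the ablations, the K-nearest negative-prompt construction described earlier (the "NP" module ablated below) can be sketched as follows; the function name, the plain Euclidean nearest-neighbour search over predicted prompt coordinates, and the default K = 1 are our assumptions.

```python
import numpy as np

def build_prompt_sets(prompts, k=1):
    """
    prompts: (Z, 2) array of predicted nucleus prompts (e.g. refined centroids).
    Returns a list of (positive_point, negative_points) pairs, where the
    negatives for nucleus z are the k prompts closest to it (requires Z > k).
    """
    prompts = np.asarray(prompts, dtype=float)
    dists = np.linalg.norm(prompts[:, None, :] - prompts[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)           # a prompt is never its own negative
    order = np.argsort(dists, axis=1)[:, :k]  # indices of the k nearest neighbours
    return [(prompts[z], prompts[order[z]]) for z in range(len(prompts))]
```

Each (positive, negatives) pair is then fed to the prompt encoder in the same way as during fine-tuning (Eq. (7)).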
}, { "figure_ref": [ "fig_3" ], "heading": "Ablation Studies", "publication_ref": [ "b18", "b46", "b53", "b50", "b16", "b18", "b46", "b50", "b74", "b48", "b16", "b72", "b10" ], "table_ref": [], "text": "On top of PromptNucSeg-H, we ablate the effect of our proposed modules on the CPM-17 dataset, which involve finetuning SAM (FT), auxiliary task learning of nuclear region segmentation (AUX), mask-aided prompt filtering (MAPF), and incorporation of negative prompts (NP). The experimental results in Tab. 6 demonstrate that all the proposed modules contribute to improving the performance of our model. Fig. 4 visualizes the effect of NP. Effect of the number of negative prompts We investigate the performance of PromptNucSeg-H with varying numbers of negative prompts on the CPM-17 dataset, as detailed in Tab. 7.\nWe initially assess the practical performance of our method by feeding predicted nuclei prompts into the segmentor. The [He et al., 2017] 0.76 0.68 0.72 0.55 0.63 0.59 0.52 0.52 0.52 0.46 0.54 0.50 0.42 0.43 0.42 0.17 0.30 0.22 DIST [Naylor et al., 2018] 0.74 0.71 0.73 0.49 0.55 0.50 0.38 0.33 0.35 0.42 0.45 0.42 0.42 0.37 0.39 0.00 0.00 0.00 StarDist [Schmidt et al., 2018] 0.85 0.80 0.82 0.69 0.69 0.69 0.73 0.68 0.70 0.62 0.53 0.57 0.54 0.49 0.51 0.39 0.09 0.10 Micro-Net [Raza et al., 2019] 0.78 0.82 0.80 0.59 0.66 0.62 0.63 0.54 0.58 0.59 0.46 0.52 0.50 0.45 0.47 0.23 0.17 0.19 Hover-Net [Graham et al., 2019] results in Rows 1-3 discover that adding negative prompts solely in the inference stage cannot enhance the model's performance. We speculate that fine-tuning with only positive prompts results in a catastrophic forgetting about the effect of negative prompts. Comparing Rows 5 and 6, as well as Rows 8 and 9, we find that employing 1 negative prompt yields better outcomes than using 2 negative prompts. We posit that this discrepancy arises from the inherent noise in predicted prompts, the introduction of which is particularly notable when using two negative prompts.\nTo verify our suspicions, we further test the \"oracle\" performance of our method by using ground-truth nuclear centroids as prompts for the segmentor. Comparing Rows 2 and 5, we observe the \"oracle\" performance significantly improves when negative prompts are integrated into the finetuning process. This observation confirms the existence of the catastrophic forgetting problem explained earlier. Examining Rows 4-6 and 7-9, we find that when the prompts are noisefree, the \"oracle\" performance continually improves with the number of negative prompts. This finding substantiates our second suspicion.\nThe substantial gaps between practical and \"oracle\" performance underscore the impact of prompt quality on the overall system performance. Given that training the prompter necessitates only nuclei point annotations, it is promising to improve the nucleus instance segmentation outcomes in a costeffective scheme by bolstering the prompter's accuracy with more budget-friendly point labels. Which module should undertake the nucleus classification task? Answer is the prompter. In prior experiments, we employ the prompter for nucleus classification. Here we explore the performance of PromptNucSeg when training the prompter in a class-agnostic manner and transferring the classification function to the segmentor. To adapt the classagnostic SAM for nucleus classification, we append a [cls] token to the mask decoder and update it in the same way as the [mask] and [IoU] tokens. 
Subsequently, the updated [cls] [He et al., 2017] 0.546 0.509 0.684 0.674 DIST [Naylor et al., 2018] 0.559 0.443 0.616 0.504 Micro-Net [Raza et al., 2019] 0.560 0.519 0.668 0.661 CIA-Net [Zhou et al., 2019] 0.620 0.577 --Full-Net [Qu et al., 2019] 0.601 0.620 0.702 0.686 Hover-Net [Graham et al., 2019] 0.618 0.597 0.705 0.697 Triple U-Net [Zhao et al., 2020] 0.621 0.601 0.711 0.685 FEEDNet [Deshmukh et al., 2022] token is fed into a MLP head to predict the categorical logits. We incorporate a multi-class focal loss of weight 1 into Eq. 2 to supervise the classification learning. Tab. 8 displays the performance of PromptNucSeg-H on the PanNuke dataset when the prompter and segmentor are responsible for nucleus classification, respectively. The results suggest a slight performance advantage of using the prompter for nucleus classification over the segmentor.\nFurther ablation studies By sharing the image encoder of the prompter and segmentor, we explore the performance of PromptNucSeg trained in an end-to-end style. In addition, we scrutinize the impact of overlapping size between sliding windows on model performance. The results and discussions are detailed in the supplementary material. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b35", "b43", "b6" ], "table_ref": [], "text": "In this paper, we have presented PromptNucSeg, a SAMinspired method for automatic nucleus instance segmentation in histology images. Architecturally, PromptNucSeg consists of two parts: a prompter generating a distinct point prompt for each nucleus, and a segmentor predicting nuclear masks driven by these prompts. Extensive experiments across three benchmarks document the superiority of PromptNucSeg. Limitations and future work. Despite the compelling accuracy and speed, PromptNucSeg exhibits a slight drawback in terms of model size compared to its counterparts, resulting in increased storage and transmission costs. For future work, we plan to diminish the model size through pruning, devise more powerful prompter, and explore the performance of PromptNucSeg built up other SAM-like pre-trained models, such as Semantic-SAM [Li et al., 2023], MedSAM [Ma and Wang, 2023] and SAM-Med2D [Cheng et al., 2023]." } ]
Nucleus instance segmentation in histology images is crucial for a broad spectrum of clinical applications. Current dominant algorithms rely on regression of nuclear proxy maps. Distinguishing nucleus instances from the estimated maps requires carefully curated post-processing, which is error-prone and parameter-sensitive. Recently, the Segment Anything Model (SAM) has attracted considerable attention in medical image segmentation owing to its impressive generalization ability and promptable property. Nevertheless, its potential for nucleus instance segmentation remains largely underexplored. In this paper, we present a novel prompt-driven framework that consists of a nucleus prompter and SAM for automatic nucleus instance segmentation. Specifically, the prompter learns to generate a unique point prompt for each nucleus, while SAM is fine-tuned to output the corresponding mask for each prompted nucleus. Furthermore, we propose including adjacent nuclei as negative prompts to enhance the model's capability to identify overlapping nuclei. Without complicated post-processing, our method sets new state-of-the-art performance on three challenging benchmarks. Code is available at github.
Unleashing the Power of Prompt-driven Nucleus Instance Segmentation
[ { "figure_caption": "Figure 1 :1Figure 1: Pipeline comparison with currently prevailing nucleus instance segmentation algorithms.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: (a) The fine-tuning process of SAM. Mask2Prompt signifies randomly sampling a positive point prompt from the foreground area of each nucleus mask. (b) The training procedure of the nucleus prompter. The integration of these two models enables automatic nucleus instance segmentation, as illustrated in Fig. 1 (b).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: (a) Ground-truth boundary of two overlapping nuclei. (b) Predicted boundary by prompting each nucleus with a positive prompt inside it. (c) Predicted boundary by prompting each nucleus with an additional negative prompt inside its overlapping nucleus. • Positive prompt • Negative prompt", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Three cases of our method w/ and w/o using negative prompts. The images marked by GT imply the ground-truth nuclear boundaries, while the others indicate predicted outcomes given different types of prompts. • Positive prompt • Negative prompt", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "He et al., 2017] [Schmidt et al., 2018] [Graham et al., 2019] [Chen et al., 2023b] [Yao et al., 2023] [Hörst et al., 2023] Performance comparison on the PanNuke dataset. Following[Chen et al., 2023b;Hörst et al., 2023], both binary PQ (bPQ) and multi-class PQ (mPQ) are computed for evaluation. 
The best and second-best PQ scores are highlighted in bold and underlined.", "figure_data": "Mask R-CNNStarDistHover-NetCPP-NetPointNu-NetCellViT-HPromptNucSeg-HTissue[(Ours)bPQmPQbPQmPQbPQmPQbPQmPQbPQmPQbPQmPQbPQmPQAdrenal0.5546 0.3470 0.69720.48680.69620.48120.70660.49440.7134 0.5115 0.70860.51340.7227 0.5128Bile Duct0.5567 0.3536 0.66900.46510.66960.47140.67680.46700.6814 0.4868 0.67840.48870.6976 0.5012Bladder0.6049 0.5065 0.69860.57930.70310.57920.70530.59360.7226 0.6065 0.70680.58440.7212 0.6043Breast0.5574 0.3882 0.66660.50640.64700.49020.67470.50900.6709 0.5147 0.67480.51800.6842 0.5322Cervix0.5483 0.3402 0.66900.46280.66520.44380.69120.47920.6899 0.5014 0.68720.49840.6983 0.5118Colon0.4603 0.3122 0.57790.42050.55750.40950.59110.43150.5945 0.4509 0.59210.44850.6096 0.4690Esophagus0.5691 0.4311 0.66550.53310.64270.50850.67970.54490.6766 0.5504 0.66820.54540.6920 0.5711Head & Neck 0.5457 0.3946 0.64330.47680.63310.45300.65230.47060.6546 0.4838 0.65440.49130.6695 0.5104Kidney0.5092 0.3553 0.69980.48800.68360.44240.70670.51940.6912 0.5066 0.70920.53660.7115 0.5786Liver0.6085 0.4103 0.72310.51450.72480.49740.73120.51430.7314 0.5174 0.73220.52240.7372 0.5333Lung0.5134 0.3182 0.63620.41280.63020.40040.63860.42560.6352 0.4048 0.64260.43140.6580 0.4398Ovarian0.5784 0.4337 0.66680.52050.63090.48630.68300.53130.6863 0.5484 0.67220.53900.6856 0.5442Pancreatic0.5460 0.3624 0.66010.45850.64910.46000.67890.47060.6791 0.4804 0.66580.47190.6863 0.4974Prostate0.5789 0.3959 0.67480.50670.66150.51010.69270.53050.6854 0.5127 0.68210.53210.6983 0.5456Skin0.5021 0.2665 0.62890.36100.62340.34290.62090.35740.6494 0.4011 0.65650.43390.6613 0.4113Stomach0.5976 0.3684 0.69440.44770.68860.47260.70670.45820.7010 0.4517 0.70220.47050.7115 0.4559Testis0.5420 0.3512 0.68690.49420.68900.47540.70260.49310.7058 0.5334 0.69550.51270.7151 0.5474Thyroid0.5712 0.3037 0.69620.43000.69830.43150.71550.43920.7076 0.4508 0.71510.45190.7218 0.4721Uterus0.5589 0.3683 0.65990.44800.63930.43930.66150.47940.6634 0.4846 0.66250.47370.6743 0.4955Average0.5528 0.3688 0.66920.47440.65960.46290.67980.48470.6808 0.4957 0.67930.49800.6924 0.5123Std0.0076 0.0047 0.00140.00370.00360.00760.00150.00590.0050 0.0082 0.03180.04130.0093 0.0147MethodDetectionNeoplasticEpithelialClassification InflammatoryConnectiveDeadPRF1PRF1PRF1PRF1PRF1PRF1Mask-RCNN", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Precision (P), Recall (R) and F1-score (F1) for detection and classification across three folds for each nucleus type. The best F1-score is in bold while the second best is underlined. Following[Graham et al., 2019], if a detected nucleus is within a valid distance (≈3µm) from an annotated nucleus and the nuclear class matches, it is counted as a true positive (TP), otherwise a false positive(FP).", "figure_data": "MethodClass Neoplastic Epithelial Inflammatory Connective DeadMask-RCNN0.4720.4030.2900.3000.069DIST0.4390.2900.3430.2750.000StarDist0.5470.5320.4240.3800.123Micro-Net0.5040.4420.3330.3340.051HoVer-Net0.5510.4910.4170.3880.139CPP-Net0.5710.5650.4050.3950.131PointNu-Net0.5780.5770.4330.4090.154CellViT-H0.5810.5830.4170.4230.149PromptNucSeg-H0.5980.5820.4410.4330.161", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Average PQ across three folds for each nuclear category on the PanNuke dataset.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Performance comparison on Kumar and CPM-17 datasets. 
The experimental results rendered with blue are omitted from the comparison because they are obtained with test-time augmenta-", "figure_data": "0.616 0.613 0.701 0.705HARU-Net [Chen et al., 2023a]0.613 0.572 0.721 0.701PointNu-Net [Yao et al., 2023]0.606 0.603 0.712 0.706PromptNucSeg-B (Ours)0.614 0.620 0.731 0.726PromptNucSeg-B + TTA0.620 0.627 0.734 0.730PromptNucSeg-L (Ours)0.621 0.626 0.734 0.730PromptNucSeg-L + TTA0.625 0.631 0.740 0.735PromptNucSeg-H (Ours)0.622 0.627 0.740 0.733PromptNucSeg-H + TTA0.624 0.631 0.743 0.737MethodParams (M) MACs (G) FPSmPQStarDist122.8263.6170.4744HoVer-Net37.6150.070.4629CPP-Net122.8264.4140.4847PointNu-Net158.1335.1110.4957CellViT-B142.9232.0200.4923PromptNucSeg-B145.659.0270.5095", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison of model size, computational cost, efficiency and performance on the PanNuke dataset. All metrics are measured on a single NVIDIA RTX 3090 GPU.", "figure_data": "FT AUX MAPF NPAJIPQ0.319 0.223✓0.728 0.723✓✓0.734 0.727✓✓✓0.737 0.731✓✓✓✓0.740 0.733", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Effect of our proposed modules.", "figure_data": "", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Effect of the number of negative prompts.", "figure_data": "ClassifierTissue bPQ mPQ Neop. Epit.Nucleus Infl.Conn. DeadPrompter0.692 0.512 0.598 0.582 0.441 0.433 0.161Segmentor 0.688 0.506 0.587 0.587 0.423 0.431 0.157", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Model performance with the prompter and segmentor as nucleus classifier, respectively.", "figure_data": "", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" } ]
Zhongyi Shui; Yunlong Zhang; Kai Yao; Chenglu Zhu; Sunyi Zheng; Jingxiong Li; Honglin Li; Yuxuan Sun; Ruizhe Guo; Lin Yang
[ { "authors": " Alberts", "journal": "", "ref_id": "b0", "title": "", "year": "2015" }, { "authors": "Bruce Alberts; Dennis Bray; Karen Hopkin; Alexander D Johnson; Julian Lewis; Martin Raff; Keith Roberts; Peter Walter", "journal": "Garland Science", "ref_id": "b1", "title": "Essential cell biology", "year": "2015" }, { "authors": "Chen ", "journal": "", "ref_id": "b2", "title": "", "year": "2016" }, { "authors": "Xiaojuan Hao Chen; Lequan Qi; Pheng-Ann Yu; Heng", "journal": "", "ref_id": "b3", "title": "Dcan: deep contour-aware networks for accurate gland segmentation", "year": "2016" }, { "authors": "Chen ", "journal": "", "ref_id": "b4", "title": "Enhancing nucleus segmentation with haru-net: A hybrid attention based residual u-blocks network", "year": "2023" }, { "authors": "Chen ", "journal": "IEEE Transactions on Image Processing", "ref_id": "b5", "title": "Cpp-net: Context-aware polygon proposal network for nucleus segmentation", "year": "2023" }, { "authors": " Cheng", "journal": "", "ref_id": "b6", "title": "", "year": "2023" }, { "authors": "Junlong Cheng; Jin Ye; Zhongying Deng; Jianpin Chen; Tianbin Li; Haoyu Wang; Yanzhou Su; Ziyan Huang; Jilong Chen; Lei Jiang", "journal": "", "ref_id": "b7", "title": "Sammed2d", "year": "2023" }, { "authors": " Deng", "journal": "", "ref_id": "b8", "title": "", "year": "2023" }, { "authors": "Ruining Deng; Can Cui; Quan Liu; Tianyuan Yao; Lucas W Remedios; Shunxing Bao; Lee E Bennett A Landman; Lori A Wheless; Keith T Coburn; Wilson", "journal": "", "ref_id": "b9", "title": "Segment anything model (sam) for digital pathology: Assess zero-shot segmentation on whole slide imaging", "year": "2023" }, { "authors": " Deshmukh", "journal": "", "ref_id": "b10", "title": "", "year": "2022" }, { "authors": "Gayatri Deshmukh; Onkar Susladkar; Dhruv Makwana; Sparsh Mittal", "journal": "Physics in Medicine & Biology", "ref_id": "b11", "title": "Feednet: A feature enhanced encoder-decoder lstm network for nuclei instance segmentation for histopathological diagnosis", "year": "2022" }, { "authors": " Gamper", "journal": "", "ref_id": "b12", "title": "", "year": "2019" }, { "authors": "Jevgenij Gamper; Alemi Navid; Ksenija Koohbanani; Ali Benet; Nasir Khuram; Rajpoot", "journal": "Springer", "ref_id": "b13", "title": "Pannuke: an open pan-cancer histology dataset for nuclei instance segmentation and classification", "year": "2019-04-10" }, { "authors": " Gamper", "journal": "", "ref_id": "b14", "title": "", "year": "2020" }, { "authors": "Jevgenij Gamper; Alemi Navid; Ksenija Koohbanani; Simon Benes; Mostafa Graham; Jahanifar; Ali Syed; Ayesha Khurram; Katherine Azam; Nasir Hewitt; Rajpoot", "journal": "", "ref_id": "b15", "title": "Pannuke dataset extension, insights and baselines", "year": "2020" }, { "authors": " Graham", "journal": "", "ref_id": "b16", "title": "", "year": "2019" }, { "authors": "Simon Graham; Quoc Dang Vu; Shan E ; Ahmed Raza; Ayesha Azam; Yee Wah Tsang; Jin Tae Kwak; Nasir Rajpoot", "journal": "Medical image analysis", "ref_id": "b17", "title": "Hover-net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images", "year": "2019" }, { "authors": " He", "journal": "", "ref_id": "b18", "title": "Mask r-cnn", "year": "2017" }, { "authors": " He", "journal": "", "ref_id": "b19", "title": "", "year": "2021" }, { "authors": "Hongliang He; Zhongyi Huang; Yao Ding; Guoli Song; Lin Wang; Qian Ren; Pengxu Wei; Zhiqiang Gao; Jie Chen", "journal": "", "ref_id": "b20", "title": "Cdnet: Centripetal direction network 
for nuclear instance segmentation", "year": "2021" }, { "authors": " He", "journal": "", "ref_id": "b21", "title": "", "year": "2023" }, { "authors": "Hongliang He; Jun Wang; Pengxu Wei; Fan Xu; Xiangyang Ji; Chang Liu; Jie Chen", "journal": "", "ref_id": "b22", "title": "Toposeg: Topology-aware nuclear instance segmentation", "year": "2023" }, { "authors": " Hörst", "journal": "", "ref_id": "b23", "title": "", "year": "2023" }, { "authors": "Fabian Hörst; Moritz Rempe; Lukas Heine; Constantin Seibold; Julius Keyl; Giulia Baldini; Selma Ugurel; Jens Siveke; Barbara Grünwald; Jan Egger", "journal": "", "ref_id": "b24", "title": "Cellvit: Vision transformers for precise cell segmentation and classification", "year": "2023" }, { "authors": " Huang", "journal": "", "ref_id": "b25", "title": "", "year": "2023" }, { "authors": "Yuhao Huang; Xin Yang; Lian Liu; Han Zhou; Ao Chang; Xinrui Zhou; Rusi Chen; Junxuan Yu; Jiongquan Chen; Chaoyu Chen", "journal": "", "ref_id": "b26", "title": "Segment anything model for medical images? Medical Image Analysis", "year": "2023" }, { "authors": " Ilyas", "journal": "", "ref_id": "b27", "title": "", "year": "2022" }, { "authors": "Talha Ilyas; Zubaer Ibna Mannan; Abbas Khan; Sami Azam; Hyongsuk Kim; Friso De Boer", "journal": "Neural Networks", "ref_id": "b28", "title": "Tsfd-net: Tissue specific feature distillation network for nuclei segmentation and classification", "year": "2022" }, { "authors": " Kirillov", "journal": "", "ref_id": "b29", "title": "", "year": "2023" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b30", "title": "Segment anything", "year": "2023" }, { "authors": " Kumar", "journal": "", "ref_id": "b31", "title": "", "year": "2017" }, { "authors": "Neeraj Kumar; Ruchika Verma; Sanuj Sharma; Surabhi Bhargava; Abhishek Vahadane; Amit Sethi", "journal": "IEEE transactions on medical imaging", "ref_id": "b32", "title": "A dataset and a technique for generalized nuclear segmentation for computational pathology", "year": "2017" }, { "authors": " Lei", "journal": "", "ref_id": "b33", "title": "", "year": "2023" }, { "authors": "Wenhui Lei; Xu Wei; Xiaofan Zhang; Kang Li; Shaoting Zhang", "journal": "", "ref_id": "b34", "title": "Medlsam: Localize and segment anything model for 3d medical images", "year": "2023" }, { "authors": " Li", "journal": "", "ref_id": "b35", "title": "", "year": "2023" }, { "authors": "Feng Li; Hao Zhang; Peize Sun; Xueyan Zou; Shilong Liu; Jianwei Yang; Chunyuan Li; Lei Zhang; Jianfeng Gao", "journal": "", "ref_id": "b36", "title": "Semantic-sam: Segment and recognize anything at any granularity", "year": "2023" }, { "authors": "Lin ", "journal": "", "ref_id": "b37", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "Lin ", "journal": "", "ref_id": "b38", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Lin ", "journal": "", "ref_id": "b39", "title": "", "year": "2023" }, { "authors": "Xian Lin; Yangyang Xiang; Li Zhang; Xin Yang; Zengqiang Yan; Li Yu", "journal": "", "ref_id": "b40", "title": "Samus: Adapting segment anything model for clinically-friendly and generalizable ultrasound image segmentation", "year": "2023" }, { "authors": " Lou", "journal": "", "ref_id": "b41", "title": "", "year": "2023" }, { "authors": "Wei Lou; Xiang Wan; Guanbin Li; Xiaoying Lou; Chenghang Li; Feng Gao; Haofeng Li", 
"journal": "", "ref_id": "b42", "title": "Structure embedded nucleus classification for histopathology images", "year": "2023" }, { "authors": "Wang ; Ma; Jun Ma; Bo Wang", "journal": "", "ref_id": "b43", "title": "Segment anything in medical images", "year": "2023" }, { "authors": " Milletari", "journal": "", "ref_id": "b44", "title": "", "year": "2016" }, { "authors": "Fausto Milletari; Nassir Navab; Seyed-Ahmad Ahmadi", "journal": "Ieee", "ref_id": "b45", "title": "V-net: Fully convolutional neural networks for volumetric medical image segmentation", "year": "2016" }, { "authors": " Naylor", "journal": "", "ref_id": "b46", "title": "", "year": "2018" }, { "authors": "Peter Naylor; Marick Laé; Fabien Reyal; Thomas Walter", "journal": "IEEE transactions on medical imaging", "ref_id": "b47", "title": "Segmentation of nuclei in histopathology images by deep regression of the distance map", "year": "2018" }, { "authors": " Qu", "journal": "", "ref_id": "b48", "title": "", "year": "2019" }, { "authors": "Hui Qu; Zhennan Yan; Gregory M Riedlinger; Subhajyoti De; Dimitris N Metaxas", "journal": "Springer", "ref_id": "b49", "title": "Improving nuclei/gland instance segmentation in histopathology images by full resolution neural network and spatial constrained loss", "year": "2019" }, { "authors": " Raza", "journal": "", "ref_id": "b50", "title": "", "year": "2019" }, { "authors": "Shan E ; Ahmed Raza; Linda Cheung; Muhammad Shaban; Simon Graham; David Epstein; Stella Pelengaris; Michael Khan; Nasir M Rajpoot", "journal": "Medical image analysis", "ref_id": "b51", "title": "Micro-net: A unified model for segmentation of various objects in microscopy images", "year": "2019" }, { "authors": " Ronneberger", "journal": "Springer", "ref_id": "b52", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015-10-05" }, { "authors": " Schmidt", "journal": "", "ref_id": "b53", "title": "", "year": "2018" }, { "authors": "Uwe Schmidt; Martin Weigert; Coleman Broaddus; Gene Myers", "journal": "Springer", "ref_id": "b54", "title": "Cell detection with starconvex polygons", "year": "2018" }, { "authors": " Song", "journal": "", "ref_id": "b55", "title": "", "year": "2021" }, { "authors": "Qingyu Song; Changan Wang; Zhengkai Jiang; Yabiao Wang; Ying Tai; Chengjie Wang; Jilin Li; Feiyue Huang; Yang Wu", "journal": "", "ref_id": "b56", "title": "Rethinking counting and localization in crowds: A purely point-based framework", "year": "2021" }, { "authors": " Su", "journal": "", "ref_id": "b57", "title": "", "year": "2021" }, { "authors": "Jianlin Su; Yu Lu; Shengfeng Pan; Ahmed Murtadha; Bo Wen; Yunfeng Liu", "journal": "", "ref_id": "b58", "title": "Roformer: Enhanced transformer with rotary position embedding", "year": "2021" }, { "authors": " Vu", "journal": "", "ref_id": "b59", "title": "", "year": "2019" }, { "authors": "Quoc Dang Vu; Simon Graham; Tahsin Kurc; Minh Nguyen Nhat; Muhammad To; Talha Shaban; Qaiser; Alemi Navid; Koohbanani; Ali Syed; Jayashree Khurram; Tianhao Kalpathy-Cramer; Zhao", "journal": "Frontiers in bioengineering and biotechnology", "ref_id": "b60", "title": "Methods for segmentation and classification of digital microscopy tissue images", "year": "2019" }, { "authors": " Wang", "journal": "", "ref_id": "b61", "title": "", "year": "2023" }, { "authors": "Haoyu Wang; Sizheng Guo; Jin Ye; Zhongying Deng; Junlong Cheng; Tianbin Li; Jianpin Chen; Yanzhou Su; Ziyan Huang; Yiqing Shen", "journal": "", "ref_id": "b62", "title": "Sammed3d", "year": "2023" }, { 
"authors": " Wu", "journal": "", "ref_id": "b63", "title": "", "year": "2023" }, { "authors": "Junde Wu; Rao Fu; Huihui Fang; Yuanpei Liu; Zhaowei Wang; Yanwu Xu; Yueming Jin; Tal Arbel", "journal": "", "ref_id": "b64", "title": "Medical sam adapter: Adapting segment anything model for medical image segmentation", "year": "2023" }, { "authors": " Xu", "journal": "", "ref_id": "b65", "title": "", "year": "2023" }, { "authors": "Qing Xu; Wenwei Kuang; Zeyu Zhang; Xueyao Bao; Haoran Chen; Wenting Duan", "journal": "Springer", "ref_id": "b66", "title": "Sppnet: A single-point prompt network for nuclei image segmentation", "year": "2023" }, { "authors": " Yao", "journal": "", "ref_id": "b67", "title": "", "year": "2023" }, { "authors": "Kai Yao; Kaizhu Huang; Jie Sun; Amir Hussain", "journal": "IEEE Transactions on Emerging Topics in Computational Intelligence", "ref_id": "b68", "title": "Pointnu-net: Keypoint-assisted convolutional neural network for simultaneous multi-tissue histology nuclei segmentation and classification", "year": "2023" }, { "authors": "Liu Zhang; Kaidong Zhang; Dong Liu", "journal": "", "ref_id": "b69", "title": "Customized segment anything model for medical image segmentation", "year": "2023" }, { "authors": " Zhang", "journal": "", "ref_id": "b70", "title": "", "year": "2023" }, { "authors": "Chaoning Zhang; Dongshen Han; Yu Qiao; Jung Uk Kim; Sung-Ho Bae; Seungkyu Lee; Choong Seon; Hong ", "journal": "", "ref_id": "b71", "title": "Faster segment anything: Towards lightweight sam for mobile applications", "year": "2023" }, { "authors": " Zhao", "journal": "", "ref_id": "b72", "title": "", "year": "2020" }, { "authors": "Bingchao Zhao; Xin Chen; Zhi Li; Zhiwen Yu; Su Yao; Lixu Yan; Yuqian Wang; Zaiyi Liu; Changhong Liang; Chu Han", "journal": "Medical Image Analysis", "ref_id": "b73", "title": "Triple u-net: Hematoxylin-aware nuclei segmentation with progressive dense feature aggregation", "year": "2020" }, { "authors": " Zhou", "journal": "", "ref_id": "b74", "title": "", "year": "2019" }, { "authors": "Yanning Zhou; Omer Fahri Onder; Qi Dou; Efstratios Tsougenis; Chen Hao; Pheng-Ann Heng", "journal": "Springer", "ref_id": "b75", "title": "Cia-net: Robust nuclei instance segmentation with contour-aware information aggregation", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 348.75, 530.61, 209.25, 13.26 ], "formula_id": "formula_0", "formula_text": "Õz = M (F (x) , P ({p z }) , [mask], [IoU])(1)" }, { "formula_coordinates": [ 3, 322.35, 603.98, 235.65, 13.26 ], "formula_id": "formula_1", "formula_text": "L sam = ωFL ( Õz , O z ) + DL ( Õz , O z ) + MSE (ν, ν) (2)" }, { "formula_coordinates": [ 4, 191.46, 160.27, 52.2, 13.29 ], "formula_id": "formula_2", "formula_text": "B = {b i } N i=1" }, { "formula_coordinates": [ 4, 115.86, 449.77, 181.15, 10.77 ], "formula_id": "formula_3", "formula_text": "w i,j = q i (c j ) -α||â i -b j || 2 (3)" }, { "formula_coordinates": [ 4, 65.71, 586.43, 231.29, 65.38 ], "formula_id": "formula_4", "formula_text": "L cls = - 1 M ⎛ ⎜ ⎝ N ∑ i=1 log q σ(i) (c i ) + β ∑ a i ∈A ′ log q i (∅) ⎞ ⎟ ⎠ L reg = γ N N ∑ i=1 ||â σ(i) -b i || 2 (4)" }, { "formula_coordinates": [ 4, 401.51, 402.16, 156.49, 12.62 ], "formula_id": "formula_5", "formula_text": "L aux = FL ( Ŝ, S)(5)" }, { "formula_coordinates": [ 4, 378.01, 464.62, 179.99, 10.1 ], "formula_id": "formula_6", "formula_text": "L prompter = L reg + L cls + L aux (6)" }, { "formula_coordinates": [ 5, 61.28, 270.64, 235.73, 24.77 ], "formula_id": "formula_7", "formula_text": "Õz = M (F (x) , P ({p z } ∪ {n z,k } K k=1 ) , [mask], [IoU])(7)" } ]
2023-11-27
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b5", "b9", "b1", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b9", "b17", "b18" ], "table_ref": [], "text": "Functional status refers to the level of activities an individual performs in their environment to meet basic needs and fulfill expected roles in daily life [1]. It is increasingly recognized as an important health indicator in addition to mortality and morbidity [2,3]. Since function is not well perceived in medical coding, most functioning information is hidden in free-text clinical notes. However, Natural Language Processing (NLP) research on the secondary use of EHRs has focused primarily on health conditions (ie, diseases, disorders) and related drugs [4]. Automatically extracting and coding functioning information from clinical text is still a relatively new and developing field in the NLP community, and there is a critical need to develop resources and methods to advance research in this area.\nFunction is a broad ontology defined by the International Classification of Functioning, Disability, and Health (ICF) [5] -a classification system developed by the World Health Organization (WHO) with the aim of standardizing the description of health and health-related states. Previous studies [6][7][8][9][10] have focused mainly on the Mobility domain of the ICF due to its well-defined and observable nature as a construct of human functioning. Thieu et al [6,10] constructed a private dataset from 1,554 physical therapy (PT) notes provided by the National Institutes of Health (NIH) Biomedical Translational Research Information System and deduced a fine-grained hierarchy between nested mobility-related entities (Figure 1): Mobility is a self-contained description of physical functional status, Action captures the activity, Assistance includes information about supporting devices or persons, Quantification details measurement values, and Score Definition provides standardized assessments, often as numerical values. They developed named-entity recognition (NER) models on this dataset and achieved 84.90% average F1 score. However, there exist limitations: (1) the unavailability of the private corpus hinders research collaboration with the public community; and (2) it is unknown how well the models perform beyond NIH data with different institutional language Figure 1: An example with nested annotation for Mobility, Action, Assistance and Quantification. idiosyncrasies.\nTo address these limitations, we explore the publicly available National NLP Clinical Datasets (n2c2) [11], which is contributed by Partners Healthcare consisting of 15 hospitals and healthcare institutes, ensuring robustness to institutional language idiosyncrasies. Unfortunately, the n2c2 data lacks mobility annotations, making them unsuitable for supervised entity recognition methods. Furthermore, manually annotating mobility-related entities by domain experts is costly at scale. Deep active learning algorithms were designed to mitigate this problem by strategically choosing the examples to annotate, aiming to obtain better downstream models with fewer annotations [12][13][14][15].\nIn this work, we employ deep active learning to create a public mobility entity dataset and develop NER models with n2c2 data. We use pool-based [16] query-by-committee sampling [17] weighted by density representativeness [18] to select the most informative sentences for human annotation. 
Our committee models, based on previous study [10], include BERT [19] and CRF [20].\nOur contributions can be summarized as follows:\n• We create the first publicly available mobility NER dataset for the research community to extract and analyze mobility information in clinical notes.\n• We provide the baseline evaluation results on our dataset using a variety of state-of-the-art NER approaches." }, { "figure_ref": [], "heading": "Background Functional status information (FSI)", "publication_ref": [ "b19", "b20", "b21", "b22", "b19", "b23", "b19", "b24", "b21", "b25" ], "table_ref": [], "text": "Due to the lack of a standardized functioning ontology [21] and the incompleteness of the ICF as a vocabulary source [22], previous studies rely on clinical staff to collect function phrases through focus groups [23,24] or manual chart reviews [21,25]. Kuang et al [21] manually gathered patient-reported function terms from clinical documents and online forums, facing challenges in matching them with Unified Medical Language System terms. Additionally, there were few attempts to automatically identify FSI from clinical notes. Those methods were limited to specific ICF codes [26] or relied on adhoc mapping tables [23] to alleviate the absence of a repository containing function-related concepts.\nNewman-Griffis et al [27] emphasized the importance of capturing FSI in healthcare systems and called for more research in this area." }, { "figure_ref": [], "heading": "Mobility domain within ICF", "publication_ref": [ "b5", "b9", "b6", "b7", "b8", "b26" ], "table_ref": [], "text": "Thieu et al [6] took the initial step towards extracting FSI from clinical notes by systematically identifying Mobility-related FSI. They first created a dataset of 250 de-identified PT notes, including details about the activity being performed, sources of assistance required, and any measurements described in the notes. Expanding to 400 PT notes [10], they achieved high performance in Mobility NER using an ensemble of CRF, RNN, and BERT models, showing the efficacy of their approach with sufficient resources. Other attempts on this dataset explored domain adaptation of mobility embeddings [7], action polarity classification [8] linking action to ICF codes [9]. However, this dataset is private and thus restricted to only a handful of NIH researchers. Recently, Zirikly et al [28] introduced publicly available dictionaries of terms related to mobility, self-care, and domestic life to facilitate the retrieval and extraction of disability-relevant information. These terms were curated from NIH and Social Security Administration documents, and their performance on other institutional data remains untested." }, { "figure_ref": [ "fig_0" ], "heading": "MATERIALS AND METHODS", "publication_ref": [], "table_ref": [], "text": "In this section, we present our deep active learning framework (Figure 2) for incrementally developing Mobility NER models together with gold-standard annotated datasets using the n2c2 research dataset. We detail the pre-processing, data retrieval, annotation, and active learning strategy." }, { "figure_ref": [], "heading": "Data Collection", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Data source selection", "publication_ref": [ "b27", "b28" ], "table_ref": [ "tab_0" ], "text": "Two most well-known dataset that provide clinical notes for research purposes are MIMIC [29,30] and the National NLP Clinical Challenges (n2c2) [11]. 
Although the MIMIC dataset has driven large amount of research in clinical informatics, its limited scope -only including data from patients in critical care units at one institution -makes it less suitable for addressing the institutional language idiosyncrasies problem. In contrast, the n2c2 dataset offers greater diversity in language idiosyncrasies with data from 15 hospitals and healthcare institutes. Additionally, n2c2 2018 dataset contains 505 discharge summaries from MIMIC-III. In this study, we utilize the n2c2 research datasets, which comprise unstructured clinical notes from the Research Patient Data Registry at Partners Healthcare, originally created for the i2b2 annual shared-task challenge projects from 2006 (Table 1). We obtained a total of 6,614 text notes by downloading all available datasets from the DBMI Data Portal 1 ." }, { "figure_ref": [], "heading": "Pre-processing and Deduplication", "publication_ref": [ "b5", "b9", "b29" ], "table_ref": [], "text": "Our work utilizes n2c2 notes, primarily discharge summaries, where we observe a sparser occurrence of mobility information compared to NIH PT notes used in previous study [6,10]. We decide to perform sentence-level annotations instead of note-level. This approach allows us to algorithmically filter more sentences containing mobility information for human annotation, reducing human efforts from scanning the large volume of irrelevant text. We first employ the Stanza Python NLP Library [31],\nknown for its strong performance in biomedical and clinical NLP tasks, particularly using the \"mimic\" model trained on MIMIC-III dataset, for sentence segmentation. We then remove duplicate sentences from reused notes across challenges, reducing the total sentence count from 564,707 to 271,827." }, { "figure_ref": [], "heading": "Downsizing the pool of relevant, unlabeled sentences", "publication_ref": [ "b30", "b31" ], "table_ref": [], "text": "As active learning requires re-scoring the entire pool of unlabeled sentences at each iteration, and our empirical observation suggests that the pool is sparse with mobility information, we choose to further downsize it. This reduces computational intensity during weekly re-scoring (see Section ) and enhances relevance to mobility information. Specifically, remaining n2c2 sentences after deduplication are indexed into Lucene [32], a high-performance text search engine. We define mobility-relevant keywords by extracting terms from the domain and subdomain definitions (including inclusions) under the \"d4 Mobility\" section of the ICF framework 2 . Next, we filter out NLTK stop words [33] and irrelevant words, and then expand the set with inflections 3 to create the first keyword set\nK = {k 1 , k 2 , .., k n }.\nRetrieved sentences are obtained from Lucence using query:\nk 1 OR k 2 OR ... OR k n .\nSince short definitions from ICF do not include all possible mobility-relevant keywords, we improve sentence retrieval recall through an iterative keyword expansion process. In each iteration, we rank content words in retrieved sentences by frequency and manually add high-frequency mobility-relevant keywords not included in the previous iteration. For example, \"gait\" and \"adls\" (activities of daily " }, { "figure_ref": [], "heading": "Manual Annotation", "publication_ref": [ "b9", "b32" ], "table_ref": [], "text": "The conventional active learning procedure starts with a small seed set of data, which is annotated to build initial mobility recognition models. 
In our study, we reuse parts of the annotation guidelines [10], taking portions related to the five entity types: Mobility, Action, Assistance, Quantification, and Score Definition. A domain expert then manually selects and annotates 20 sentences, ensuring that each one contains at least one Mobility entity.\nIn each iteration, we start by employing latest BERT model, trained on updated data from previous iterations, to pre-tag the present batch of unlabeled sentences selected through active learning.\nThis pre-tagging step reduces manual annotation time, since annotators only correct existing tags rather than starting from scratch. To ensure accuracy, two human annotators follow a two-phase process. The first phase, called Blind Annotation, involves each annotator referring to annotation guidelines to correct the machine pre-tagging errors. In the second phase, called Gold Standard Annotation, the annotators collaboratively resolve any discrepancies and achieve a consistent set of corrections. These additional gold standard labels obtained through the annotation process are then used to retrain the mobility NER model.\nWe implement each iteration within the time frame of one week. During the week, two medical students annotate a new batch of 125 sentences, with 100 added to the training set, and 25 to the validation set. The weekends will be allocated to training new mobility recognition models and rescoring all remaining sentences in the unlabeled pool. Newly selected sentences from the unlabeled pool will be ready for the next week. All annotation activities are completed on the Inception platform [34]. " }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [ "b33", "b33", "b16" ], "table_ref": [], "text": "At each iteration, we apply active learning to select the most informative sentences for human annotation. We use a straight-forward pool-based query-by-committee sampling strategy [35]. A group of models, known as a \"committee\", evaluates the unlabeled pool and selects sentences on which they have the highest disagreement. Let x = [x 1 , ..., x T ] represents a sequence of length T with a corresponding label sequence y = [y 1 , ..., y T ]. NER models are trained to assign tags to each token in the input sequence x, indicating whether the token belongs to a particular type of mobility entities.\nWe use vote entropy [35] as the base informativeness score:\nϕ V E (x) = - 1 T T t=1 m∈M V (y t , m) C log V (y t , m) C\nwhere C is the number of committee models, M is a list that contains all possible label tags, and V (y t , m) is the number of \"votes\" or the level of agreement between committee members on assigning the tag m to the token t.\nTo further improve the representativeness of selected sentences, we implement the information density metric proposed by Settles et al [18]. They defined the density score of a sentence as its average similarity to all other sentences in the unlabeled pool. The information density score is calculated as a product of the base informativeness score and the density score controlled by a parameter β:\nϕ ID (x) = ϕ V E (x) × ( 1 |L| |L| l=1 sim(x, x (l) )) β\nTo fit the limitation of our human annotator resource, we select top 125 sentences with highest information density scores for human labeling at the next active learning iteration." 
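As an illustration of the scoring above, the sketch below computes token-level vote entropy over the committee predictions for one sentence (here C = 2: the BERT and CRF models) and weights it by a pre-computed density score. Function and variable names are illustrative rather than taken from our codebase.

```python
import math
from collections import Counter

def vote_entropy(tag_sequences):
    """tag_sequences: list of C tag lists (one per committee model) for one sentence."""
    num_models = len(tag_sequences)
    length = len(tag_sequences[0])
    total = 0.0
    for t in range(length):
        votes = Counter(seq[t] for seq in tag_sequences)   # V(y_t, m) for each tag m
        total -= sum((v / num_models) * math.log(v / num_models) for v in votes.values())
    return total / length

def information_density(tag_sequences, density, beta=1.0):
    """phi_ID(x) = phi_VE(x) * density(x)^beta, cf. the formulas above."""
    return vote_entropy(tag_sequences) * (density ** beta)

# Usage sketch (preds and density are hypothetical per-sentence lookups):
# scores = {sid: information_density(preds[sid], density[sid]) for sid in pool}
# batch = sorted(pool, key=scores.get, reverse=True)[:125]
```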
}, { "figure_ref": [], "heading": "Named Entity Recognition Modeling", "publication_ref": [ "b9", "b34" ], "table_ref": [], "text": "We formulate the task of identifying mobility entities as a Nested NER problem since Action, Assistance and Quantification entities are encapsulated within the span of the Mobility entity. Thieu et al [10] proposed to use joined entity approach [36] to deal with nested entities, creating more complex tags by concatenating BIO (Beginning, Inside, and Outside) tags at all levels of nesting. While their approach has demonstrated good performance, it may not be suitable for our low-resource dataset.\nCreating more complex tags, e.g B-Action I-Mobility, leads to sparsity in less frequent tags, making it challenging for NER models to learn and correctly identify these rare tags during training and inference. Therefore, we keep our active learning pipeline simple by training a separate model for each entity type using the BIO format where M = [O, B-entity, I-entity]." }, { "figure_ref": [], "heading": "Model Choices", "publication_ref": [ "b9", "b17", "b18", "b35", "b36", "b18" ], "table_ref": [], "text": "Previous results [10] showed that BERT model [19] had the highest recall and lowest precision while CRF model [20] had the lowest recall and highest precision for identifying mobility entities. This observation inspires us to use the proportion of prediction disagreement between these two models (C = 2) for active learning. The selected models are:\n• A BERT classifier built by adding a linear classifier on top of an original BERT model and supervised trained with a cross-entropy objective for sequence labeling. We initialize our model using Bio+Discharge Summary BERT [37], a model fine-tuned from BioBERT [38] using only MIMIC-III discharge summaries. We use AdamW optimizer to update model parameters with a batch size of 32 and a learning rate of 5e-6. Training runs on an NVIDIA A6000 GPU for 100 epochs with early stopping of 30.\n• A Stanford CRF-NER classifier based on the CRFClassifier package [20]. The package enables feature extractors for NER and implementation of a linear chain Conditional Random Field (CRF). We train the model by modifying the configuration file to point at our labeled data." }, { "figure_ref": [ "fig_2" ], "heading": "Disagreement Signal", "publication_ref": [ "b6" ], "table_ref": [], "text": "From a theoretical perspective, Action is the central component of mobility information. A relevant sentence should contain at least one Action or Mobility entity while unnecessarily containing any Assistance or Quantification entity. As such, it is theoretically appropriate to compute the disagreement score based on Action NER models. From an empirical perspective, Action entities are shorter and easier to identify, while Mobility entities tend to encompass entire clauses or sentences [7], making them more challenging for sequence labeling models in a low-resource setting. We further empirically disregard disagreement signal from Assistance and Quantification because of their trivial predictive accuracy in the initial dataset. Specifically, our initial NER dataset contains 27 Assistance, 33 Quantification and no Score Definition entities. The numbers became even smaller after splitting the dataset into train and validation sets, making it insufficient to train NER models. Initial evaluation shows zero F1 scores on BERT models trained for these two entity types (Figure 3). 
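To make the per-entity BIO setup and the BERT committee member above concrete, the following is a minimal fine-tuning sketch for the Action tag set. The AdamW optimizer and 5e-6 learning rate follow the text, while the Hugging Face checkpoint id, the toy example sentence, and the sub-word label-alignment scaffolding are our own assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

label_names = ["O", "B-Action", "I-Action"]
# assumed checkpoint id for Bio+Discharge Summary BERT
checkpoint = "emilyalsentzer/Bio_Discharge_Summary_BERT"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=len(label_names))
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-6)

# one toy training example with word-level BIO tags (hypothetical sentence)
words = ["Patient", "ambulates", "independently", "in", "the", "hallway", "."]
word_tags = ["O", "B-Action", "O", "O", "O", "O", "O"]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
aligned, prev = [], None
for wid in enc.word_ids():
    # ignore special tokens and continuation sub-words in the loss
    aligned.append(-100 if wid is None or wid == prev else label_names.index(word_tags[wid]))
    prev = wid
enc["labels"] = torch.tensor([aligned])

model.train()
loss = model(**enc).loss   # cross-entropy over BIO tags, -100 positions ignored
loss.backward()
optimizer.step()
```

A separate model of this form is trained for each entity type, and its sentence-level predictions are compared with those of the CRF tagger to produce the disagreement signal.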
Based on both theoretical and empirical observations, we choose to only rely on Action NER models for computing the disagreement score in active learning." }, { "figure_ref": [], "heading": "Information Density Score", "publication_ref": [ "b37" ], "table_ref": [], "text": "The density score requires pairwise similarity calculations between sentences in the unlabeled pool.\nWe use Sentence Transformers [39] loaded with Bio+Discharge Summary BERT weights to encode each sentence into an embedding vector. These vectors are used to compute cosine similarity scores between sentences. Information density metric, however, is computational demanding such that the number of required vector similarity calculations grows quadratically with the number of sentences in the unlabeled pool. For efficiency, we pre-compute density scores for all sentences offline only once and store these results for quick lookup during the active learning process. Finally, we set the controlled parameter β to 1." }, { "figure_ref": [], "heading": "Benchmarking", "publication_ref": [ "b38", "b17", "b35", "b39", "b40", "b41", "b42", "b43", "b39", "b44", "b17", "b35", "b40", "b9" ], "table_ref": [], "text": "We adopt five-fold cross-validation in all experiments on our final gold standard dataset. Specifically, we use StratifiedKFold function from scikit-learn library [40] to create balanced folds, ensuring that each fold contains a similar number of instances for each entity type. The final F 1 score is the average of the five-fold scores.\nFirst, we train a separate model for each entity type using BIO format as mentioned in and compare the performance using three pretrained language models: BERT base , BERT large [19] and Bio+Discharge Summary BERT [37]. Considering the nesting structure of the entity types, we further apply two state-of-the-art nested NER methods: Pyramid [41] and BINDER [42], which have demonstrated superiority on well-known datasets such as ACE04 [43], ACE05 [44], and NNE [45].\n• Pyramid [41] is a layered neural architecture for nested NER that incorporates L flat NER layers stacked in a pyramid shape. Token or text segment embeddings are recursively fed from bottom to top. The architecture utilizes both direct and inverse pyramids, enabling each decoding layer to consider global information from both the lower and upper layers. Following Pyramid's best performance settings, we obtain token embeddings by concatenating the encoded embeddings from ALBERT xxlarge-v2 [46] with either BERT base,large [19] or Bio+Discharge Summary BERT [37].\n• BINDER [42] is a bi-encoder framework for NER that leverages contrastive learning to maximize the similarity between the vector representations of entity mentions and their corresponding types. We use their best reported hyperparameter setup and copy the entity type description from the annotation guidelines [10]. For example, Assistance entity is described as to perform an activity\". " }, { "figure_ref": [], "heading": "RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Active Learning", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Gold Standard Dataset", "publication_ref": [ "b9", "b35" ], "table_ref": [ "tab_2", "tab_3" ], "text": "After repeating the active learning cycles for 9 months, we obtain a dataset comprised of 4,265 sentences that includes 11,784 entities (Table 2). 
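As a sketch of the offline density pre-computation described above, each pooled sentence can be embedded once and assigned its mean cosine similarity to the rest of the pool. The Hugging Face checkpoint id is our assumption for Bio+Discharge Summary BERT (loading a plain BERT model through SentenceTransformer applies default mean pooling), and a very large pool would be processed in chunks rather than as a single similarity matrix.

```python
from sentence_transformers import SentenceTransformer, util

def precompute_density(sentences, checkpoint="emilyalsentzer/Bio_Discharge_Summary_BERT"):
    """Return {sentence: density}, where density is the mean cosine similarity
    of a sentence to all other sentences in the unlabeled pool."""
    model = SentenceTransformer(checkpoint)
    emb = model.encode(sentences, convert_to_tensor=True, normalize_embeddings=True)
    sim = util.cos_sim(emb, emb)                              # |L| x |L| matrix
    density = (sim.sum(dim=1) - 1.0) / (len(sentences) - 1)   # drop self-similarity
    return {s: d.item() for s, d in zip(sentences, density)}
```

These scores are looked up during each weekly re-scoring round and combined with vote entropy as shown earlier, with β set to 1.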
There are two main differences in the distribution of entities between our data set and the closest work, the NIH private dataset [10]. First, we do not detect any instance of the Score Definition entity. It seems Score Definition is specific to physical therapy notes at the NIH, and thus is not observed in discharge summaries. As a result, all of our model training and evaluation exclude this entity from consideration. Second, we observe significantly smaller number of Assistance and Quantification entities compared to Action and Mobility entities in our dataset. This disparity poses challenges for training joint decoding NER models due to class imbalance and low-resource sample.\nFor quality assurance, we measure inter-annotator agreement (IAA) of entity mention spans using F 1 scores. Table 3 reports IAAs between two annotators and between each annotator versus gold standard adjudication using exact matching and partial matching. The average exact matching scores between the two annotators for all entity types are 0.72, indicating a moderate level of agreement in identifying the exact boundaries of the entities. Furthermore, the average gap of 19% between exact matching and partial matching across the two annotators is relatively large. It suggests that while identifying the presence of a mobility-relevant entity in a sentence is easy, accurately determining its span boundary is more challenging. It is also noted that leveraging a language model pretrained on the same data domain (i.e. discharge summaries) has shown to be beneficial. Models that report the highest F1 scores for each entity type are the ones fine-tuned or utilize embeddings derived from Discharge Summary BERT [37]. However, NER performance does not improve when using a larger pretrained model such as BERT-large." }, { "figure_ref": [], "heading": "DISCUSSION Comparison to Related Works", "publication_ref": [ "b9", "b26", "b26" ], "table_ref": [], "text": "Disregarding the difference in data distribution and model architecture, entity recognition performance on our dataset is slightly lower than on the NIH private dataset [10], with a 2-4% performance gap for Action and Mobility entities, and over 8% for Assistance and Quantification entities. These gaps can be explained by three main reasons. First, our dataset is more challenging due to the diversity of language use in the n2c2 research dataset, which includes clinical notes from 15 different hospitals and healthcare institutes. In contrast, the NIH private dataset only contains physical therapy notes collected at the NIH. Second, the NIH private dataset is annotated by senior experts, whereas our dataset is annotated by medical students, including one master's and one PhD student.\nThe discrepancy in experience and expertise might contribute to a lower IAA in our annotation.\nLastly, our dataset is largely imbalanced with low-resource Assistance and Quantification entities, leading to challenges in training and evaluating NER models for these entities.\nWe also scan our dataset for overlap of mobility terms compared to a dictionary recently published by Zirikly et al [28]. Our dataset includes 3,525 sentences that each contains at least one Mobility entity. Scanning these sentences against 2,413 mobility terms provided in the NIH dictionary, we found 907 sentences that do not contain any NIH mobility term. For example, a phrase \"able to salute and brush teeth with either hand\" is annotated in our dataset with ICF codes d440 -Fine hand use and d445 -Hand and arm use. 
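A toy sketch of the dictionary-overlap scan just described is given below; the lower-cased substring matching and the function name are our assumptions about how such a scan could be run.

```python
def sentences_without_terms(sentences, terms):
    """Return the sentences that contain none of the given dictionary terms."""
    terms = [t.lower() for t in terms]
    return [s for s in sentences if not any(t in s.lower() for t in terms)]

# missed = sentences_without_terms(mobility_sentences, nih_mobility_terms)
# print(len(missed))   # 907 of 3,525 Mobility sentences in our case
```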
However, the NIH dictionary [28] only considers \"brush teeth\" in a self-care context with ICF code d520 -Caring for body parts, thus missing its mobility context. Another example is that a keyword search using the NIH mobility terms will miss the sentence \"She was able to go two flights without extreme difficulty\" because the generic verb \"go\" is not included." }, { "figure_ref": [], "heading": "Future Direction", "publication_ref": [ "b45", "b46", "b47" ], "table_ref": [], "text": "We plan to apply in-context learning on pretrained large language models (LLMs) [47,48] to address the low-resource entity types in our dataset. Although powerful, LLMs (including ChatGPT) face challenges in determining the boundary characters of entities, and also struggle to adhere to the instruction not to rephrase the extracted entity text [49]. These limitations highlight areas where further improvement is needed to enhance in-context learning for low-resource entity types." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this study, we annotate the first publicly available dataset to train and evaluate NER models that extract ICF's Mobility-related information from clinical notes. We also benchmark popular and cutting-edge NER methods on the dataset. We hope that releasing the dataset to the research community will accelerate the development of methodologies to identify the complex spectrum of information about whole-person functioning in EHRs." }, { "figure_ref": [], "heading": "ACKNOWLEDGEMENTS", "publication_ref": [], "table_ref": [], "text": "We would like to thank Suhao Chen (SC) for pre-processing n2c2 research datasets and Thanh Duong (TD) for installing the Inception annotation platform." }, { "figure_ref": [], "heading": "DATA AVAILABILITY", "publication_ref": [], "table_ref": [], "text": "The n2c2 research datasets are available at (https://portal.dbmi.hms.harvard.edu/projects/ n2c2-nlp/) to researchers who signed NLP Research Purpose and Data Use Agreement form. The Mobility annotation will be released on our research group website, or via n2c2's Community Annotations Downloads section." }, { "figure_ref": [], "heading": "FUNDING", "publication_ref": [], "table_ref": [], "text": "This study is funded by grant #HR21-173 through the Oklahoma Center for the Advancement of Science and Technology." }, { "figure_ref": [], "heading": "AUTHOR CONTRIBUTIONS", "publication_ref": [], "table_ref": [], "text": "TDL implemented the active learning pipeline, conducted experiments, and wrote the manuscript. SA and BS annotated the n2c2 datasets. ZM and TT trained annotators and monitored the annotation process. TT designed system architecture, supervised project, and revised the manuscript. All authors approved the submitted version." }, { "figure_ref": [], "heading": "CONFLICT OF INTEREST STATEMENT", "publication_ref": [], "table_ref": [], "text": "The authors do not have conflicts of interest related to this study." } ]
Objective: Function is increasingly recognized as an important indicator of whole-person health, although it receives little attention in clinical natural language processing research. We introduce the first public annotated dataset specifically on the Mobility domain of the International Classification of Functioning, Disability and Health (ICF), aiming to facilitate automatic extraction and analysis of functioning information from free-text clinical notes. Methods: We utilize the National NLP Clinical Challenges (n2c2) research dataset to construct a pool of candidate sentences using keyword expansion. Our active learning approach, using query-by-committee sampling weighted by density representativeness, selects informative sentences for human annotation. We train BERT and CRF models, and use predictions from these models to guide the selection of new sentences for subsequent annotation iterations. Results: Our final dataset consists of 4,265 sentences with a total of 11,784 entities, including 5,511 Action entities, 5,328 Mobility entities, 306 Assistance entities, and 639 Quantification entities. The inter-annotator agreement (IAA), averaged over all entity types, is 0.72 for exact matching and 0.91 for partial matching. We also train and evaluate common BERT models and state-of-the-art Nested NER models. The best F1 scores are 0.84 for Action, 0.70 for Mobility, 0.62 for Assistance, and 0.71 for Quantification. Conclusion: Empirical results demonstrate the promising potential of NER models to accurately extract mobility functioning information from clinical text. The public availability of our annotated dataset will facilitate further research to comprehensively capture functioning information in electronic health records (EHRs).
Leveraging deep active learning to identify low-resource mobility functioning information in public clinical notes
[ { "figure_caption": "Figure 2 :2Figure 2: Our iterative deep active-transfer learning", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm 2 :2Our active learning procedure. Input: Unlabeled data pool U Output: Final labeled dataset L and trained models M 1 , M 2 L, U ← start(U) /* Manually select starting set */ M 1 , M 2 ← train(L) /* Train initial models */ while not stop criterion() do I ← query(M 1 , M 2 , U) /* Sample new batch of unlabeled data */ I ′ ← annotate(I) /* Annotate the new batch */ U ← U -I; L ← L ∪ I ′ /* Update unlabeled pool and gold standard dataset */ M 1 , M 2 ← train(L) /* Train models with updated gold standard dataset */ end Active Learning", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Results of weekly BERT model for each entity type.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 33Figure 3 shows the performance of the BERT model for each entity type on the weekly validation set, which is regularly updated during the active learning process. Adding weekly curated data into the existing gold standard dataset results in fluctuation of the F1 score due to the change in the data distribution and the introduction of additional noise. It turns out only Action model maintains stable improvement throughout the active learning process. Performance fluctuates greater on entity types with a small number of instances such as Assistance and Quantification.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "The National NLP Clinical Challenges (n2c2)'s Research Datasets.1 Only part of the original 2010 data is available for research beyond the original challenge.3 The dataset used in Track 1 of the 2018 n2c2 shared task consisted of longitudinal records from 288 patients, drawn from the 2014 i2b2/UTHealth shared task corpus.", "figure_data": "Year Dataset NameSize2006 Deidentification & Smoking889 discharge summaries2008 Obesity1237 discharge summaries2009 Medication1243 discharge summaries2010 Relations1748 discharge summaries and progress reports 22011 Coreference978 discharge summaries, progress notes, clinical re-ports, pathology reports, discharge records, radiologyreports, surgical pathology reports, and other reports2012 Temporal Relations310 discharge summaries2014 Deidentification & Heart Disease1304 longitudinal medical records2018 Track 1: Clinical Trial Cohort Se-1304 discharge summaries 3lection2018 Track 2: Adverse Drug Events and505 discharge summariesMedication Extraction1 2016, 2019 and 2022 dataset is not available for download on DBMI Data Portal.", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "living) are absent in the ICF descriptions but were added to the keyword set through our iterative procedure. After five manual iterations, we obtain a set of 200 keywords, including inflections. 
Using this final keyword set on the 271,827 unique sentences above, we narrow down to 22,894 mobilityrelevant sentences as our unlabeled data pool for subsequent active learning.", "figure_data": "Algorithm 1: Iterative sentence retrieval and keyword expansion.Input: All n2c2 datasets N , ICF Mobility descriptions DOutput: Unlabeled data pool UK ← init keyword set(D)/* Create initial keyword set */while not stop criterion() doU ← lucene search(N , K)/* Retrieve new sentences */K ← update keyword set(K, U)/* Find new keywords */end", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Number of entity mentions in our public dataset and NIH private dataset. Note: Act = Action, Mob = Mobility, Ast = Assistance, Quant = Quantification, ScDf = Score Definition", "figure_data": "Dataset PublicSourceAct MobAst Quant ScDf TotalOurYes4,265 n2c2 sentences 5,511 5,328 306639011,784NIH [10]No400 NIH PT notes 4,527 4,631 2,517 2,303303 14,281", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Inter-annotation agreement between 2 annotators (A and B) and Gold Standard dataset.", "figure_data": "Note: E = exact matching, P = partial matchingActMobAstQuantEPEPEPEPA vs B0.8 0.92 0.73 0.93 0.7 0.91 0.65 0.89A vs Gold standard 0.9 0.95 0.87 0.97 0.88 0.96 0.79 0.94B vs Gold standard 0.87 0.95 0.78 0.96 0.73 0.92 0.74 0.94", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Average F1 score for five-fold cross-validation experiments", "figure_data": "MethodPretrained weights/embeddingsActMobAst QuantBERT base82.67 65.35 58.36 68.67BERT [19]BERT large83.08 67.17 60.47 67.62Discharge Summary BERT82.71 66.19 61.7 70.58ALBERT xxlarge-v2 + BERT base83.51 69.46 61.38 68.52Pyramid [41]ALBERT xxlarge-v2 + BERT large83.56 68.986166.59ALBERT xxlarge-v2 + Discharge Summary BERT 83.94 70.01 59.53 69.09BERT base82.94 67.84 58.42 66.1BINDER [42]BERT large83.14 67.52 60.467.2Discharge Summary BERT82.56 68.42 56.44 66.18NER Benchmark", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "presents the F1 score for each entity type, averaged across five-fold cross validation. The best performing models achieve F1 scores of 0.84 for Action, 0.7 for Mobility, 0.62 for Assistance, and 0.71 for Quantification. Model performance is consistent with training sample size, that is, achieving high accuracy with Action entities while struggling with the data sparsity of Assistance and Quantification entities. In addition, lengthy spans and token length variability in Mobility entities are barriers to accurate exact identification. Surprisingly, training separate BERT models[19] for low-resource entity types such as Assistance and Quantification yield better performance than training a single nested NER model for all entity types. A possible reason is that we used the micro F1 score as the stopping criterion for training the nested NER model. This approach favors the dominant entity types in an imbalanced dataset, which, in turn, compromises the accuracy of rare entity types. As a result, Pyramid[41] achieves the best accuracy on Action and Mobility entities with more training data. However, BINDER[42] does not perform well on our dataset.", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" } ]
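The committee-based loop summarized in Algorithm 2 above can be sketched compactly. The snippet below is only an illustrative outline under our own naming; the training, acquisition-scoring, and annotation callbacks are caller-supplied placeholders rather than the authors' released code, and the batch size and round count are arbitrary defaults.

```python
# Illustrative sketch of the query-by-committee active-learning loop (cf. Algorithm 2).
# train_fns, score_fn, and annotate_fn are caller-supplied placeholders, not the paper's API.

def active_learning_loop(unlabeled, seed, train_fns, score_fn, annotate_fn,
                         batch_size=200, rounds=10):
    labeled = list(seed)                                   # manually selected starting set
    pool = [s for s in unlabeled if s not in seed]
    models = [fit(labeled) for fit in train_fns]           # e.g., a BERT tagger and a CRF tagger

    for _ in range(rounds):                                # or until a stopping criterion is met
        # Rank the remaining pool by committee disagreement weighted by representativeness.
        ranked = sorted(pool, key=lambda s: score_fn(s, models, pool), reverse=True)
        batch = ranked[:batch_size]
        labeled.extend(annotate_fn(batch))                 # human annotation of the new batch
        pool = [s for s in pool if s not in batch]
        models = [fit(labeled) for fit in train_fns]       # retrain committee on updated gold data
    return labeled, models
```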
Tuan-Dung Le; Zhuqi Miao; Samuel Alvarado; Brittany Smith; William Paiva; Thanh Thieu
[ { "authors": "Kevin P High; Susan Zieman; Jerry Gurwitz; Carl Hill; Jennifer Lai; Thomas Robinson; Mara Schonberg; Heather Whitson", "journal": "Journal of the American Geriatrics Society", "ref_id": "b0", "title": "Use of functional assessment to define therapeutic goals and treatment", "year": "2019" }, { "authors": "Gerold Stucki; Jerome Bickenbach", "journal": "European journal of physical and rehabilitation medicine", "ref_id": "b1", "title": "Functioning: the third health indicator in the health system and the key indicator for rehabilitation", "year": "2017" }, { "authors": "Maren Hopfe; Birgit Prodinger; Jerome E Bickenbach; Gerold Stucki", "journal": "Disability and rehabilitation", "ref_id": "b2", "title": "Optimizing health system response to patient's needs: an argument for the importance of functioning information", "year": "2018" }, { "authors": "Yanshan Wang; Liwei Wang; Majid Rastegar-Mojarad; Sungrim Moon; Feichen Shen; Naveed Afzal; Sijia Liu; Yuqun Zeng; Saeed Mehrabi; Sunghwan Sohn", "journal": "Journal of biomedical informatics", "ref_id": "b3", "title": "Clinical information extraction applications: a literature review", "year": "2018" }, { "authors": " ", "journal": "World health organization", "ref_id": "b4", "title": "International classification of functioning, disability, and health : Icf", "year": "2001" }, { "authors": "Thanh Thieu; Jonathan Camacho; Pei-Shu Ho; Julia Porcino; Min Ding; Lisa Nelson; Elizabeth Rasch; Chunxiao Zhou; Leighton Chan; Diane Brandt", "journal": "IEEE", "ref_id": "b5", "title": "Inductive identification of functional status information and establishing a gold standard corpus: A case study on the mobility domain", "year": "2017" }, { "authors": "Denis Newman-Griffis; Ayah Zirikly", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Embedding transfer for low-resource medical named entity recognition: A case study on patient mobility", "year": "2018-07" }, { "authors": "Denis Newman-Griffis; Ayah Zirikly; Guy Divita; Bart Desmet", "journal": "", "ref_id": "b7", "title": "Classifying the reported ability in clinical mobility descriptions", "year": "2019" }, { "authors": "Denis Newman-Griffis; Jonathan Camacho Maldonado; Pei-Shu Ho; Maryanne Sacco; Rafael Jimenez Silva; Julia Porcino; Leighton Chan", "journal": "Frontiers in rehabilitation sciences", "ref_id": "b8", "title": "Linking free text documentation of functioning and disability to the icf with natural language processing", "year": "2021" }, { "authors": "Thanh Thieu; Jonathan Camacho Maldonado; Pei-Shu Ho; Min Ding; Alex Marr; Diane Brandt; Denis Newman-Griffis; Ayah Zirikly; Leighton Chan; Elizabeth Rasch", "journal": "International journal of medical informatics", "ref_id": "b9", "title": "A comprehensive study of mobility functioning information in clinical notes: entity hierarchy, corpus annotation, and sequence labeling", "year": "2021" }, { "authors": "Yanyao Shen; Hyokun Yun; Zachary C Lipton; Yakov Kronrod; Animashree Anandkumar", "journal": "", "ref_id": "b10", "title": "Deep active learning for named entity recognition", "year": "2017" }, { "authors": "Ramon Maldonado; Sanda M Travis R Goodwin; Harabagiu", "journal": "", "ref_id": "b11", "title": "Active deep learning-based annotation of electroencephalography reports for cohort identification", "year": "2017" }, { "authors": "Weixin Liang; James Zou; Zhou Yu", "journal": "", "ref_id": "b12", "title": "Alice: Active learning with contrastive natural language explanations", "year": "2020" 
}, { "authors": "Artem Shelmanov; Dmitri Puzyrev; Lyubov Kupriyanova; Denis Belyakov; Daniil Larionov; Nikita Khromov; Olga Kozlova; Ekaterina Artemova; V Dmitry; Alexander Dylov; Panchenko", "journal": "", "ref_id": "b13", "title": "Active learning for sequence tagging with deep pre-trained models and bayesian uncertainty estimates", "year": "2021" }, { "authors": "Lewis David", "journal": "ACM", "ref_id": "b14", "title": "A sequential algorithm for training text classifiers: Corrigendum and additional data", "year": "1995" }, { "authors": "Sebastian Seung; Manfred Opper; Haim Sompolinsky", "journal": "", "ref_id": "b15", "title": "Query by committee", "year": "1992" }, { "authors": "Burr Settles; Mark Craven", "journal": "", "ref_id": "b16", "title": "An analysis of active learning strategies for sequence labeling tasks", "year": "2008" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b17", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Jenny Rose Finkel; Trond Grenager; Christopher Manning", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Incorporating non-local information into information extraction systems by Gibbs sampling", "year": "2005-06" }, { "authors": "Jinqiu Kuang; April F Mohanty; Charlene R Vh Rashmi; Bruce E Weir; Qing Bray; Zeng-Treitler", "journal": "American Medical Informatics Association", "ref_id": "b19", "title": "Representation of functional status concepts from clinical documents and social media sources by standard terminologies", "year": "2015" }, { "authors": "Csongor I Samson W Tu; Tania Nyulas; Mark A Tudorache; Musen", "journal": "American Medical Informatics Association", "ref_id": "b20", "title": "A method to compare icf and snomed ct for coverage of us social security administration's disability listing criteria", "year": "2015" }, { "authors": "Rehab Mahmoud; Nashwa El-Bendary; Hoda Mo Mokhtar; Aboul Ella Hassanien", "journal": "IEEE", "ref_id": "b21", "title": "Icf based automation system for spinal cord injuries rehabilitation", "year": "2014" }, { "authors": " Jeffrey L Greenwald; Victoria Patrick R Cronin; Goodarz Carballo; Garry Danaei; Choy", "journal": "Medical care", "ref_id": "b22", "title": "A novel model for predicting rehospitalization risk incorporating physical function, cognitive status, and psychosocial support using natural language processing", "year": "2017" }, { "authors": "Elizabeth A Steven J Skube; Elliot G Lindemann; Mari Arsoniadis; Elizabeth C Akre; Genevieve B Wick; Melton", "journal": "", "ref_id": "b23", "title": "Characterizing functional health status of surgical patients in clinical notes", "year": "2018" }, { "authors": "Rita Kukafka; Michael E Bales; Ann Burkhardt; Carol Friedman", "journal": "Journal of the American Medical Informatics Association", "ref_id": "b24", "title": "Human and automated coding of rehabilitation discharge summaries according to the international classification of functioning, disability, and health", "year": "2006" }, { "authors": "Denis Newman-Griffis; Julia Porcino; Ayah Zirikly; Thanh Thieu; Jonathan Camacho Maldonado; Pei-Shu Ho; Min Ding; Leighton Chan; Elizabeth Rasch", "journal": "BMC Public Health", "ref_id": "b25", "title": "Broadening horizons: the case for capturing function and the role of health informatics in its use", "year": "2019" }, { "authors": "Ayah Zirikly; Bart Desmet; Julia Porcino; Jonathan Camacho 
Maldonado; Pei-Shu Ho; Rafael Jimenez Silva; Maryanne Sacco", "journal": "European Language Resources Association", "ref_id": "b26", "title": "A whole-person function dictionary for the mobility, selfcare and domestic life domains: a seedset expansion approach", "year": "2022-06" }, { "authors": "E W Alistair; Tom J Johnson; Lu Pollard; Li-Wei H Shen; Mengling Lehman; Mohammad Feng; Benjamin Ghassemi; Peter Moody; Leo Szolovits; Roger G Anthony Celi; Mark", "journal": "Scientific data", "ref_id": "b27", "title": "Mimic-iii, a freely accessible critical care database", "year": "2016" }, { "authors": "Alistair Johnson; Lucas Bulgarelli; Tom Pollard; Steven Horng; Leo Anthony Celi; Roger Mark", "journal": "PhysioNet", "ref_id": "b28", "title": "Mimic-iv", "year": "2020" }, { "authors": "Yuhao Zhang; Yuhui Zhang; Peng Qi; Christopher D Manning; Curtis P Langlotz", "journal": "", "ref_id": "b29", "title": "Biomedical and clinical english model packages in the stanza python nlp library", "year": "2020" }, { "authors": "Andrzej Bia Lecki; Rob Muir; Grant Ingersoll", "journal": "", "ref_id": "b30", "title": "Apache lucene 4", "year": "2012" }, { "authors": "Steven Bird; Edward Loper", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "NLTK: The natural language toolkit", "year": "2004-07" }, { "authors": "Jan-Christoph Klie; Michael Bugert; Beto Boullosa; Richard Eckart De Castilho; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "The inception platform: Machine-assisted and knowledge-oriented interactive annotation", "year": "2018-06" }, { "authors": "Ido Dagan; Sean P Engelson", "journal": "Elsevier", "ref_id": "b33", "title": "Committee-based sampling for training probabilistic classifiers", "year": "1995" }, { "authors": "Beatrice Alex; Barry Haddow; Claire Grover", "journal": "", "ref_id": "b34", "title": "Recognising nested named entities in biomedical text", "year": "2007" }, { "authors": "Emily Alsentzer; John R Murphy; Willie Boag; Wei-Hung Weng; Di Jin; Tristan Naumann; Matthew Mcdermott", "journal": "", "ref_id": "b35", "title": "Publicly available clinical bert embeddings", "year": "2019" }, { "authors": "Jinhyuk Lee; Wonjin Yoon; Sungdong Kim; Donghyeon Kim; Sunkyu Kim; Chan Ho; So ; Jaewoo Kang", "journal": "Bioinformatics", "ref_id": "b36", "title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", "year": "2020" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b37", "title": "Sentence-bert: Sentence embeddings using siamese bertnetworks", "year": "2019" }, { "authors": "Fabian Pedregosa; Gaël Varoquaux; Alexandre Gramfort; Vincent Michel; Bertrand Thirion; Olivier Grisel; Mathieu Blondel; Peter Prettenhofer; Ron Weiss; Vincent Dubourg", "journal": "the Journal of machine Learning research", "ref_id": "b38", "title": "Scikitlearn: Machine learning in python", "year": "2011" }, { "authors": "Jue Wang; Lidan Shou; Ke Chen; Gang Chen", "journal": "", "ref_id": "b39", "title": "Pyramid: A layered model for nested named entity recognition", "year": "2020" }, { "authors": "Sheng Zhang; Hao Cheng; Jianfeng Gao; Hoifung Poon", "journal": "", "ref_id": "b40", "title": "Optimizing bi-encoder for named entity recognition via contrastive learning", "year": "2022" }, { "authors": "Alexis George R Doddington; Mark A Mitchell; Lance A Przybocki; Stephanie M Ramshaw; Ralph M Strassel; Weischedel", "journal": "Lrec", "ref_id": "b41", "title": "The 
automatic content extraction (ace) program-tasks, data, and evaluation", "year": "2004" }, { "authors": "Christopher Walker; Stephanie Strassel; Julie Medero; Kazuaki Maeda", "journal": "", "ref_id": "b42", "title": "Ace 2005 multilingual training corpus", "year": "2006" }, { "authors": "Nicky Ringland; Xiang Dai; Ben Hachey; Sarvnaz Karimi; Cecile Paris; James R Curran", "journal": "", "ref_id": "b43", "title": "Nne: A dataset for nested named entity recognition in english newswire", "year": "2019" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b44", "title": "Albert: A lite bert for self-supervised learning of language representations", "year": "2019" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b45", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b46", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Yan Hu; Iqra Ameer; Xu Zuo; Xueqing Peng; Yujia Zhou; Zehan Li; Yiming Li; Jianfu Li; Xiaoqian Jiang; Hua Xu", "journal": "", "ref_id": "b47", "title": "Zero-shot clinical entity recognition using chatgpt", "year": "2023" } ]
[ { "formula_coordinates": [ 7, 42.52, 562.59, 100.13, 11.5 ], "formula_id": "formula_0", "formula_text": "K = {k 1 , k 2 , .., k n }." }, { "formula_coordinates": [ 7, 42.52, 562.59, 512.79, 35.41 ], "formula_id": "formula_1", "formula_text": "k 1 OR k 2 OR ... OR k n ." }, { "formula_coordinates": [ 9, 186.16, 622.99, 224.31, 32.03 ], "formula_id": "formula_2", "formula_text": "ϕ V E (x) = - 1 T T t=1 m∈M V (y t , m) C log V (y t , m) C" }, { "formula_coordinates": [ 10, 194.32, 190.8, 208.28, 33.22 ], "formula_id": "formula_3", "formula_text": "ϕ ID (x) = ϕ V E (x) × ( 1 |L| |L| l=1 sim(x, x (l) )) β" } ]
2023-11-27
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b56", "b2", "b19", "b0", "b26", "b0", "b26", "b2", "b19", "b40", "b19", "b23", "b44", "b2", "b19", "b60", "b4", "b19", "b40", "b19", "b51", "b19", "b39" ], "table_ref": [], "text": "The semantic segmentation networks, e.g., Transformers [56] and Convolutional Neural Networks [7], learned from data with a closed-set of known classes have shown outstanding performance. However, they often suffer performance degradation when encountering novel objects or classes in new dynamic environments [3,19,52]. To improve their performance, several transfer learning and do- main adaptation methods [1,26,52] were introduced to adapt trained models into deployed environments. While the former often aims to fine-tune the model on labeled data collected in the new environments, the latter adapts the model to the new domains in an unsupervised manner [1,26]. However, these methods cannot handle novel objects well due to their close-set learning. In practice, the semantic segmentation models should be able to adaptively and continually learn the new knowledge of novel classes. It motivates the development of Continual Learning paradigm [3,19,40], a.k.a, Continual Semantic Segmentation (CSS), where the segmentation models are learned sequentially to new contents of data.\nFar apart from prior segmentation methods [7, 56] that learn one time on static, closed-set data, Continual Learning requires the segmentation models to learn from dynamic, open-set data [19,52]. In addition, in particular scenarios, accessing previous learning data is restricted due to privacy concerns. In CSS, three challenges have been identified, including (1) Catastrophic Forgetting, (2) Background Shift, and (3) Fairness. While the catastrophic forgetting problem [23,44,50] depicts the segmentation model tends to forget its knowledge when learning new data, background shift indicates the problem of classes of previous or future data (unknown classes) have collapsed into a background class [3,19,60]. Prior methods [5,19,40,52] addressed these two problems by introducing knowledge distillation and pseudo labels. However, these methods can not handle unknown classes since they either consider these unknown classes as a background class or assign unknown pixels by a pseudo label of prior known classes [19,52]. More importantly, the last problem, fairness, is a significant challenge that limits the performance of CSS models.\nAs shown in Fig. 2, the number of pixels of each class in training data have been imbalanced among classes and significantly decreased after each task. Thus, this bias influences the learning procedure and model predictions that later cause unfair predictions among classes. However, limited studies are taking the fairness problem into account. [51] presented a similar problem in domain adaptation and extended it to continual learning [52]. These methods rely on the assumption of ideal fair or balanced data distributions. However, it is not applicable in practice since the size, i.e., the number of pixels, of several classes can never be more significant than others. For example, the size of the bottle should not be more significant than the size of a car. Meanwhile, the current knowledge distillation methods [4,19,39] in CSS are unable to handle the fairness problem since they focus on modeling catastrophic forgetting and background shift problems. Therefore, it is essential to develop a new CSS approach to address these limitations." 
}, { "figure_ref": [ "fig_0" ], "heading": "Contributions of This Work: This work presents a novel", "publication_ref": [], "table_ref": [], "text": "Fairness Learning via Contrastive Attention Approach (FALCON) to Continual Semantic Segmentation (as shown in Fig. 1). First, we introduce a novel Contrastive Clustering Paradigm approach to Continual Learning that models the catastrophic forgetting problem. Second, by analyzing the limitation of vanilla Contrastive Clustering in biased data, we introduce a novel Fairness Contrastive Clustering loss to model the fairness problem in continual learning efficiently. Third, to effectively model the background shift problem, we introduce a new Attention-based Visual Grammar that model the topological structures of feature distribution to handle the unknown classes effectively. Finally, the ablation studies illustrate the effectiveness of the proposed approach in different aspects of fairness promotion in CSS models. Compared with prior methods, our approach achieves state-of-the-art performance on different settings of three standard benchmarks of CSS, including ADE20K, Pascal VOC, and Cityscapes." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b19", "b2", "b60", "b40", "b4", "b39", "b13", "b45", "b46", "b19", "b27", "b9", "b10", "b36", "b36", "b9", "b25", "b10", "b16", "b41", "b29", "b41", "b43", "b55", "b31", "b14", "b51", "b51" ], "table_ref": [], "text": "Continual Semantic Segmentation Several studies have been introduced to address catastrophic forgetting and background shift problems in CSS [18-20, 28, 32, 34, 37, 38]. The common approach of CSS adopts knowledge distillation [19] and pseudo labels [3] to model catastrophic forgetting and background shift, respectively. Later, it is further improved by decoupling knowledge representations [60], modeling the inter-and intra-class knowledge [40], distinguishing the feature representations of the future classes [5,39], reducing background confusion [59], or modeling distillation loss via the geodesic flow [48]. Another approach [4] adopted the mask-based segmentation networks [12,13] to improve the performance of CSS models. Recent studies have introduced CSS under the unsupervised domain adaptation settings [45,46,54]. However, prior studies have yet well modeled the unknown classes in the open-world environments. Particularly, as previous methods [4,19] use the pseudo labels to model the unknown classes, the future classes will be treated as a background class. Then, Joseph et al. [27] improves the unknown class modeling by using clustering but this method considers all different unknown classes as a single cluster, leading to the non-discriminative features among unknown classes. Contrastive Learning is a common learning approach [9,10,36] to structure the deep feature representations in the deep latent space. Oorde et al. [36] first introduced the Noise-Contrastive Estimation (InfoNCE) learning framework. Then, Chen et al. [9] presented SimCLR, a selfsupervised contrastive learning approach to improve the representation power of Residual Networks. He [25] proposed a Momentum Contrast framework for unsupervised representation learning. Later, it was further improved by using MLP projection head [10] and extended to improve the self-supervised training process of vision transformers [11]. Cui et al. [16] introduced a supervised parametric contrastive learning loss to address the long-tailed recognition. Li et al. 
[30] adopted contrastive learning to develop the one-stage online contrastive clustering method. Radford et al. [41] presents a contrastive framework to learn the vision-language model. Later, several methods also adopted this framework to vision-language pretraining [29,41]. Imbalanced and Fairness Learning The early methods utilized the balanced Softmax loss [43] to alleviate the impact of imbalanced data distribution. Later, Wang et al. [55] introduced a Seesaw loss to re-balance the contributions of positive and negative instances via the mitigation and compensation modules. Ziwei et al. [31] introduced a dynamic meta-embedding to model the imbalanced classification problem. Chu et al. [14] reduce the bias in the segmentation model by presenting a new stochastic training scheme. Szabo et al. [49] presented a tilted cross-entropy loss to promote class-relevant fairness. However, there are limited studies that address the fairness problem in CSS. Truong et al. [51] introduced a fairness domain adaptation approach to semantic segmentation and later extended it into continual learning setting [52]. However, these methods [51,52] rely on the assumption of ideal balanced data which could not be achieved by nature. To address the limitations in prior work, this paper will introduce a novel approach to effectively model the fairness problem and unknown classes in the continual learning setting." }, { "figure_ref": [], "heading": "The Proposed FALCON Approach", "publication_ref": [ "b0", "b19", "b2", "b19", "b47", "b60", "b19", "b39" ], "table_ref": [], "text": "CSS aims to learn a segmentation network F on sequence data D = {D 1 , ..., D T } where T is the number of learning steps. At learning step t, the model F encounters a dataset D t = {(x t , ŷt )} where x t ∈ R H×W ×3 is the image and y ∈ R H×W is a segmentation label of x t . The ground truths at learning step t only consist of current classes C t , while the class labels of the previous C 1...t-1 and future steps C t+1...T are collapsed into a background class. Formally, learning the CSS model at step t can be formed as Eqn. (1).\nθ * t = arg min θ t E x t ,ŷ t ∈D t L CE y t , ŷt + λ CL L CL F (x t )(1)\nwhere, y t = F (x t , θ t ), θ t is the parameter of F at current learning step t, L CE is the cross-entropy loss, λ CL is the balanced weight. and L CL is the CSS objective. At learning step t, the segmentation model F is required to be able to predict both previously learned classes C 1...t-1 and current new classes C t . Under this learning scenario, three challenges have been identified, i.e., Catastrophic Forgetting, Background Shift, and Fairness. Several prior methods were presented to model the two first issues in CSS using knowledge distillation [4,19]. The last issue has not been well addressed yet due to its challenges [52]. Prior methods [3,4,19,47,60] adopt knowledge distillation to design L CL . However, this method prevents the CSS model from diverging knowledge learned previously, therefore resulting in limiting the ability to adopt new knowledge [52]. In addition, these methods have not addressed fairness and background shift problems due to their dedicated design for maintaining knowledge via distillation [4,19,39]. Therefore, to address these problems, we introduce a novel Fairness Learning via Contrastive Attention Approach to CSS." 
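As a rough PyTorch-style sketch of the learning objective in Eqn. (1), a single continual step combines the cross-entropy on the current ground truth with the weighted CSS term L_CL. The two-output model interface and the `css_loss` callback are our assumptions for illustration, not the paper's exact code.

```python
import torch.nn.functional as F

def falcon_objective(model, css_loss, images, labels, lambda_cl=1.0, ignore_index=255):
    """Eqn. (1): L_CE(y, y_hat) + lambda_CL * L_CL(F(x)).

    `model` is assumed to return per-pixel logits and decoder features;
    `css_loss` stands in for the CSS term L_CL (the contrastive clustering objective).
    ignore_index=255 is a common segmentation convention, not a value from the paper.
    """
    logits, features = model(images)          # (B, num_classes, H, W), (B, D, H, W)
    ce = F.cross_entropy(logits, labels, ignore_index=ignore_index)
    return ce + lambda_cl * css_loss(features)
```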
}, { "figure_ref": [ "fig_1" ], "heading": "Continual Learning via Contrastive Clustering", "publication_ref": [ "b19", "b39", "b27", "b25", "b27" ], "table_ref": [], "text": "Apart from prior methods [4,19,39], our CSS is defined as Contrastive Clustering Learning. Given a set of centroid vectors\n{c i } N K +N U i=1\nwhere N K = |C 1..t | and N U is the number of known and unknown classes up to current learning tasks. Prior work [4,27,52] often defined the number of unknown classes as 1 where background classes are considered as a single unknown class. Formally, our Contrastive Clustering Learning for CSS can be defined as Eqn. (2).\nL CL F (x t ) = c i L Cont (F t , c i ) = c i h,w -ϕ(f t h,w , c i ) log exp(f t h,w × c i ) f ′ exp(f ′ × c i )(2)\nwhere F t ∈ R H×W ×D is the feature maps extracted from the input image x t by the segmentation network F , f t h,w ∈ R D is the feature at the pixel location (h, w) of features F t , f ′ means the summation over all feature representations f ′ ∈ R D , and ϕ : R D × R D → [0, 1] is the function that determines either f t h,w belongs to the cluster c i or not. By defining CSS as contrastive clustering learning, the knowledge of the segmentation model has been well maintained via the cluster vectors c. Then, minimizing Eqn. (2) will separate the representations of different classes while gathering the features of the same class into the same cluster. As the cluster vectors c of the old classes C 1..t-1 have been well learned to represent their knowledge, these vectors are frozen at learning step t to maintain the knowledge representations of previous classes to address the catastrophic forgetting problem. To effectively learn cluster vectors c, the cluster vector c will periodically updated after each M steps by the momentum update [25,27,52] based on the features f t h,w assigned to cluster c. However, there are two major problems in contrastive clustering learning. First, since the training data in CSS suffer the bias among classes as shown in Fig. 2, this bias will influence Eqn. (2) and cause the unfair predictions. Second, as the function ϕ requires the labels to determine the features belonging to clusters, it limits the ability to model the unknown classes where the labels are not available. Therefore, Secs. 3.2-3.3 will present a novel approach to tackle these problems." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Fairness Contrastive Clustering Learning", "publication_ref": [ "b8", "b16", "b8", "b16" ], "table_ref": [], "text": "While the contrastive clustering learning defined in Eqn. (2) promotes the compact representations of features around their clusters, inspired by [8,16,62], we observe that the imbalanced class distribution will influence unfair behaviors among classes. In particular, for simplicity, we consider {f t i } L i=1 is the set of features that belong to the cluster c at learning step t (i.e., ϕ(f t i , c) = 1) and L is the number of features (in this case, L is the total number of pixels belong to the class of cluster c). Let us define the enforcement between the feature f t t and the cluster c as Hence, the lower the value of the enforcement ℓ i is, the more compact the representation of visual features and clusters is. Then, the contrastive clustering learning loss in Eqn.\nℓ i = exp(f t i ×c)\n(2) of the entire cluster c can be defined as Eqn. 
(3).\nLCont(; , c) = - L i=1 log exp(f t i × c) f ′ exp(f ′ × c) = - L i=1 log ℓi (3)\nProposition 1: If the contrastive clustering loss L Cont (; , c) achieves the optimal value, the enforcement ℓ i between the feature and the cluster will converge to\nℓ i = L -1 .\nProposition 1 has implied that the class with more samples will result in a lower value of the enforcement and produce a more compact representation while the class having fewer samples will be more scattered in the feature space due to the higher value of the enforcement. In particular, let L major and L minor be the number of samples of the major and minor class where L major >> L minor . Then, based on Proposition 1, the enforcement between features and the cluster of the major class will be significantly lower than the one of the minor class, i.e., L -1 major << L -1 minor . Therefore, a direct adoption of the contrastive clustering loss in Eqn. (2) will result in an unfair CSS model. In addition, for classes in the minority group, the weak enforcement results in the feature presentations of classes being far away from their clusters. Thus, the model will produce nondiscriminative features compared to the ones in the majority group. Moreover, if the loss is applied to the cases of unknown labels, these feature representations can be scattered in the latent space and pulled into the incorrect clusters due to weak enforcement between features and clusters (Fig. 4).\nTo address the unfair problem in contrastive clustering learning, inspired by [8,16,62], we introduce a scaling factor α and a learnable transition vector v for each cluster c (all clusters have the same value of α but different vector v). Our Fairness Contrastive Clustering Learning Loss for the entire cluster in Eqn. ( 3) can be re-formed as:\nL α Cont (; , c) = -α L i=1 log exp(f t i × c) f ′ exp(f ′ × c) -log exp(v × c) f ′ exp(f ′ × c)(4)\nIntuitively, the scaling factor α will help to re-scale the impact of the enforcement in learning, and the transitive vector v assists in translating the center cluster into the proper position of the latent space. This action promotes the compactness of clusters in the minority group. Proposition 2: If the fairness contrastive clustering loss L α Cont (; , c) achieves the optimal value, the enforcement ℓ i between the feature and the cluster will converge to ℓ i = (α -1 + L) -1 . Proofs of Propositions 1-2 are provided in the supplementary.\nUnder the Proposition 2, when the value of α is small, the divergence of the enforcement between major and minor classes will be smaller, i.e., ||(α\n-1 + L major ) -1 -(α -1 + L minor ) -1 || < ||L -1\nmajor -L -1 minor ||. Fig. 4 has illustrated the impact of fairness contrastive clustering loss. Therefore, our designed proposed fairness contrastive loss has effectively addressed the fairness issue in Eqn. (2). It should be noted that although the smaller α results in the fairer enforcement varied from major to minor classes. However, if the value of scaling factor α is too small, the contrastive clustering loss will rely more on the enforcement of the transitive vector v, and the distribution of features f t i around its cluster c will be scattered due the weak enforcement caused by small α. Therefore, the value of scaling factor α in practice should be carefully selected." 
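To make Eqn. (4) concrete, the PyTorch-style sketch below evaluates the fairness contrastive clustering loss for a single cluster. The tensor layout and the set of features entering the normalizing sum are our assumptions, and the paper additionally notes that its implementation normalizes the contrastive loss by the number of samples, which is omitted here for brevity.

```python
import torch

def fair_contrastive_cluster_loss(feats, cluster, transitive, all_feats, alpha=0.05):
    """Sketch of Eqn. (4) for one cluster c.

    feats:      (L, D) features assigned to this cluster (phi(f_i, c) = 1)
    cluster:    (D,)   cluster center c
    transitive: (D,)   learnable transition vector v of this cluster
    all_feats:  (N, D) features used in the normalizing sum over f'
    alpha:      scaling factor (e.g., 5e-2 for ADE20K as reported in the paper)
    """
    log_denom = torch.logsumexp(all_feats @ cluster, dim=0)    # log sum_f' exp(f' x c)
    log_l_i = feats @ cluster - log_denom                      # log l_i for each assigned feature
    log_l_v = transitive @ cluster - log_denom                 # enforcement term of v on c
    return -(alpha * log_l_i.sum() + log_l_v)
```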
}, { "figure_ref": [ "fig_5" ], "heading": "Open-world Unknown Class Modeling", "publication_ref": [ "b27", "b19", "b27", "b35", "b57" ], "table_ref": [], "text": "An ideal CSS approach must be able to model the unknown classes without supervision, especially in openworld environments [27,52]. Prior studies have adopted the pseudo-label strategies [4,19] based on the model predictions to assign labels for seen classes while unseen classes have been ignored, thus resulting in non-discriminative features. [27,52] improved the background modeling by using an additional prototypical representation for unknown classes. However, these approaches consider different unknown classes as one (i.e., N U = 1) resulting in nondistinguished representations of different unknown classes. Thus, modeling function ϕ in Eqn. (2) without supervision of different unknown classes (i.e., N U > 1) is challenging. Although modeling ϕ to determine the single feature f belonging to the cluster c is challenging, prior studies in clustering [35,57,58] have suggested that determine a set of features {f t i } M i=1 belonging to cluster c should be easier. This derives from the fact that even though the feature representations of different classes are different, the distributions of features around its cluster (termed as Visual Grammar) in the feature space should be similar among classes or clusters. As a result, by learning the distribution of features and their clusters, the model ϕ can determine whether a feature belongs to a cluster. Then, by learning the model ϕ on prior known clusters and features, the knowledge of ϕ can be adaptively applied to unknown clusters. Fig. 5 illustrates our visual grammar model of the cluster distributions." }, { "figure_ref": [], "heading": "Limitations of Prior Clustering Methods", "publication_ref": [ "b57", "b35", "b53", "b4", "b35", "b53", "b19", "b39", "b27", "b19", "b39", "b27" ], "table_ref": [], "text": "The traditional methods in clustering, e.g., KNN or density-based clustering [21], remain limited to noisy features leading to producing the incorrect cluster assignment. Meanwhile, the modern clustering methods, e.g., Graph Neural Networks (GNNs) [57,58], require a large memory to build the affinity graph for clusters. In addition, GNNs often learn the local structures of graphs (or clusters) and accumulate them via the aggregation layers. Hence, the global structures of the clusters, i.e., visual grammar, are not well modeled by GNNs [35]. Therefore, to address these limitations, we introduced a new Attention-based Visual Grammar approach to efficiently model the distribution of features and their clusters via the self-attention mechanism [53]. can be defined as Eqn. (5).\nmin Θ E c,{f c i } M i=1 [-log p(f c 1 , f c 2 , ..., f c M , c, Θ)] = min Θ E c,{f c i } M i=1 [-log p(∆ c 1 , ∆ c 2 , ..., ∆ c M , c, Θ)](5)\nwhere 5) defines the visual grammar of the cluster by modeling the feature distribution of f c i and its cluster center c. Let ϕ : R (M +1)×D → [0, 1] M be a function receiving a center c and a set of M features\n∆ c i = f c i -c. Eqn. (\n{f i } M i=1 (cos(f i , c) ≥ cos(f i+1 , c)) to determine whether f i belonging to c, i.e., u = ϕ(∆ 1 , ∆ 2 , ..., ∆ M , c) where ∆ i = f i -c, u = [u 1 ,\nu 2 , ..., u M ] and u i = 1 denotes f i belong to cluster c and vice versa. Hence, the visual grammar model in Eqn. ( 5) can be modeled by the network ϕ as follows with parameter Θ as follows:\nΘ * = arg min Θ E c,{f i } M i=1 [-log p(u|∆ 1 , ∆ 2 , ..., ∆ M , c, Θ)] (6)\nEqn. 
( 6) aims to model the distribution of features around its cluster by learning the correlation of relatively topological structures ∆ i of features f i around cluster c. Then, based on knowledge of the cluster distribution, the model ϕ is able to determine whether a feature f i belongs to cluster c. Hence, it is essential that the model ϕ has the ability to exploit the correlation between features f i and cluster c to learn the topological structure of visual grammar. Therefore, we adopt the self-attention mechanism [35,53] to efficiently model these feature correlations. Particularly, the model ϕ is formed by L ϕ blocks of self-attention as follows:\nz 0 = LN([∆ 1 , . . . , ∆ M , c]) +β, a l = z l + MHSA(z)) z l+1 = a l + MLP(LN(a l )), u = Proj(z L ϕ )(7)\nwhere β is the positional embedding, LN is Layer Normalization, MHSA is multi-head self-attention, MLP is the multi-layer perception, and Proj is the linear projection. By using Transformers, the correlation of cluster distributions can be well modeled by the self-attention mechanism.\nCluster Assignment via Visual Grammar Instead of assigning the clusters based on the model prediction [4,19,39] or nearest cluster [27,52] that are less effective, the cluster assignment in our approach will be performed by the visual grammar model, i.e., the visual grammar model will consider the M closest features around cluster c to assign the cluster for these features. Then, the cluster assignments are used to compute our Fairness Contrastive Clustering loss. In addition, following common practices [4,19,39], we improve background shift modeling by using the cluster assignments of features as the pseudo labels of pixels.\nUnknown Cluster Initialization Prior work [27,52] initialized a single unknown cluster (N U = 1), thus resulting in producing non-discriminative class-wise features. However, there should be more than a single unknown cluster (N U > 1) to produce discriminative features for different unknown classes. Therefore, our approach first initializes a list of potential unknown clusters at each learning step via DB-SCAN [21] on the features of unknown classes extracted by the current CSS model. For the new known class C t , we initialize these clusters based on the mean of their feature representations. Meanwhile, the clusters of known classes learned in previous steps are maintained." }, { "figure_ref": [ "fig_2" ], "heading": "Continual Learning Procedure", "publication_ref": [ "b35", "b4", "b19", "b17", "b6", "b24", "b56", "b56", "b35", "b61", "b22", "b4", "b19" ], "table_ref": [ "tab_0", "tab_1", "tab_1", "tab_2" ], "text": "Fig. 3 illustrates the training procedure of our continual learning approach. At each learning step t, the CSS model F with θ t is trained with the Fairness Contrastive Clustering loss defined in Eqn. ( 4) and the previous visual grammar model ϕ with Θ t-1 . In addition, we introduce a cluster regularizer R C to avoid the clusters of different classes collapsing into a single cluster. 
Therefore, the entire CSS learning objective in our approach can be formed as:\narg min θ t E x t ,ŷ t L CE y t , ŷt + λ CL c i L α Cont F t , ci + λ C R C (c)(8)\nwhere R C (c) = ci,cj {max(0, 2∇ -||c i -c j ||)} 2 is the regularizer to avoid the cluster collapsing, λ C is the balanced weight, and ∇ is the margin between clusters.\nTraining Procedure of Visual Grammar Model At CSS learning step t, we adopt the visual grammar model trained on the previous learning step, i.e., ϕ with Θ t-1 , to perform the cluster assignment for the contrastive clustering loss defined in Eqn. (2). Then, the visual grammar model at learning step t, i.e., ϕ with Θ t , will be learned (initialized from Θ t-1 ) on the features extracted from the dataset and the set of known clusters c up to the current learning step. Following [35], we sample a center c from the known clusters and its M closest features to train the visual grammar model. Initial Visual Grammar Model At the first learning step t = 1, since no clusters have been learned at initial, the visual grammar model ϕ with Θ 0 is not available. However, as common practices in CSS [4,5,19], the segmentation model is typically trained from a pre-trained backbone on ImageNet [17]. As a result, the features extracted at the first learning step are characterized by the ImageNet features. Therefore, we adopt this philosophy to initialize our visual grammar model (ϕ with Θ 0 ) by pre-training the visual grammar model on the ImageNet dataset. Then, during CSS training, we will progressively train our visual grammar model at each learning step as aforementioned. we adopt DeepLab-V3 [6] with ResNet-101 [24] and Seg-Former [56] with MiT-B3 [56] in our experiments. For the Visual Grammar model, we adopt the design of [35] with L ϕ = 12 blocks of multi-head self-attention layers. The feature vectors from the last layer of the decoder are used for our L α Cont loss. The value α is set individually for each dataset, i.e., α = 5 × 10 -2 for ADE20K, α = 10 -2 for VOC for Cityscapes. The details of our hyper-parameters are provided in the supplementary. Evaluation Protocols: We evaluate models on three standard datasets of CSS, i.e., ADE20K [61], Pascal VOC [22], and Cityscapes [15]. Following common practices [4,5,52], our experiments are conducted on the overlapped CSS settings. In particular, on ADE20K, we use three different settings, i.e., ADE20K 100-50 (2 steps), ADE20K 100-10 (6 steps), and ADE20K 100-5 (11 steps). On Pascal VOC, we evaluate FALCON in three benchmarks, i.e., VOC 15-5 (2 steps), VOC 15-1 (6 steps), and VOC 10-1 (11 steps). On Cityscapes, we conduct domain incremental experiments with three settings, i.e., Cityscapes 11-5 (3 steps), Cityscapes 11-1 (11 steps), and Cityscapes 1-1 (21 steps). Following [4,19], the mean Intersection over Union (mIoU) metric is adopted in our comparison, including mIoU of the last learning step on initial classes, incremental classes, and all classes. In addition, to illustrate the fairness improvement, we report the mIoU of major and minor classes. 1 presents our results using DeepLab-V3 [7] with Resnet101 on ADE20K 100-50 and ADE20K 100-10 benchmarks. We evaluate the impact of the fairness contrastive clustering loss L α Cont by comparing it with the vanilla contrastive clustering loss L Cont . As shown in our results, the overall performance has been significantly improved to 37.9% and 36.4% on ADE20K 100-50 and ADE20K 100-10, respectively. 
In addition, the fairness of the model has been promoted since the mIoU performance of major and minor groups was enhanced. We also study the impact of network backbones and cluster margin ∇ in our supplementary. Effectiveness of Scaling Factor of Cluster Table 2 illustrates the experimental results of the impact of different scaling factor α on ADE20K 100-50 and Pascal VOC 15-5 benchmarks. As shown in Table 2, when the value of scaling factor α gradually decreases, the performance of our proposed approach is improved accordingly since the fairness contrastive loss in Eqn (4) tends to be more uniform across major and minor classes. However, when the scaling factor is too small (α = 0.005), the impact of the loss enforcement becomes weaker leading to the weaker enforcement of the fairness contrastive clustering, resulting in lower overall performance. In addition, we have observed that the higher the number of classes demands the higher the value of α since it will increase the compactness of more clusters. Effectiveness of Loss Contributions Table 3 illustrates the contributions of proposed learning objectives. For the model without using visual grammar, we only use a single unknown cluster (N U = 1) and adopt the nearest cluster strategies to assign clusters of unknown pixels. By using only cross-entropy loss, the mIoU performance remains low due to catastrophic forgetting and background shift problems. Meanwhile, with our fairness clustering loss L α Cont , visual grammar model ϕ, and the cluster regularizer R, the mIoU performance has been significantly improved to 37.9% and 36.4% on ADE20K 100-50 and ADE20K 100-10, respectively. Moreover, our FALCON has significantly promoted the fairness of segmentation models illustrated by the mIoU improvement of major and minor groups." }, { "figure_ref": [ "fig_6" ], "heading": "Effectiveness of Visual Grammar", "publication_ref": [ "b2", "b19" ], "table_ref": [], "text": "We evaluate FALCON under three settings, i.e., Nearest Cluster, Fixed ϕ pretrained ImageNet (without updating on each learning step), and Adaptive ϕ (with updating on each learning step). As in Table 4, the mIoU result using only the nearest cluster remains ineffective. Meanwhile, the adaptive visual grammar model updated at each learning step further boosts the mIoU performance and promotes fairness, i.e., increased by 4.5% and 4.9% on ADE20K 100-50 and ADE20K 100-10 compared to the nearest cluster approach. In addition, we study the impact of choosing the number of features M in the visual grammar model in our supplementary. Fig. 6 illustrates the feature distributions of unknown classes (future class). As a result, our FALCON approach is able to model features of unknown classes into different clusters and produce better and more compact clusters compared to the one without Fairness Learning via Contrastive Attention. Table 5. Comparison with Prior Methods on ADE20K Benchmarks (Note: The results of MiB [3], PLOP [19], and FairCL [52] using Transformer on ADE20K 100-5 were not reported in prior studies. The upper bound results are not trained with fairness objective)." }, { "figure_ref": [ "fig_7" ], "heading": "Network", "publication_ref": [ "b19" ], "table_ref": [ "tab_5", "tab_6" ], "text": "Method ADE20K 100-50 ADE20K 100-10 ADE20K 100-5 0-100 101-150 all avg 0-100 101-150 all avg 0-100 101-150 all avg DeepLab-V3 PLOP [19] 41.9 14.9 32.9 37. 4 5 presents our experimental results using DeepLab-V3 and Transformer networks compared to prior CSS methods. 
Overall, our proposed approach has achieved the SOTA performance compared to prior methods. In particular, by using DeepLab-V3, our approach has achieved SOTA performance, i.e., the mIoU results of 37.9% and +36.4% on ADE20K 100-50 and ADE20K 100-10 benchmarks, higher than prior FairCL [52]. Meanwhile, our approach using Transformer has outperformed the prior SOTA CoMFormer [4] model by +3.5%, +8.0%, and +2.6% on ADE20K 100-50, ADE20K 100-10, and ADE20K 100-5 respectively. In addition, our mIoU results on the initial classes remain competitive with the upper-bounded results because our method is able to well handle the fairness problem compared to the fully supervised learning approach.\nWe also report our results on the ADE20K 50-50 benchmark in the supplementary. As in Fig. 7, FALCON produces better segmentation maps compared to prior methods. Pascal VOC Table 6 presents the results of our FALCON on Pascal VOC benchmarks. Our proposed approach has consistently achieved the SOTA performance on three bench-marks. In particular, compared to the prior FairCL [52] approach, our methods using DeepLab-V3 have improved the mIoU performance up to 73.50%, 69.83%, and 62.41% on Pascal VOC 15-5, Pascal VOC 15-1, and Pascal VOC 10-1, respectively. Additionally, by using the better network backbone, i.e., Transformer, the performance of the segmentation model is also further improved. Our results have reduced the gap with the upper bound performance. Cityscapes Table 7 reports the performance of our approach using DeepLab-V3 compared to prior methods on three different settings of Cityscapes benchmarks, i.e., Cityscapes 11-5, Cityscapes 11-1, and Cityscapes 1-1. As shown in the experimental results, the performance of our methods has consistently outperformed prior FairCL [52] approach by +3.78%, +3.14%, and +6.02% on three benchmarks. Similar to our experiments on ADE20K and VOC, the better network brings higher results. These results have shown the effectiveness of FALCON on various benchmarks." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "This paper has presented a novel Fairness Learning via Contrastive Attention approach to CSS. In particular, the Then, the visual grammar model was presented to model the unknown classes. The experimental results on different benchmarks have shown the SOTA performance and fairness promotion of our proposed FALCON approach. Limitations Our study chose a set of learning hyperparameters to support our theoretical analysis. However, it potentially consists of several limitations related to choosing learning parameters and cluster initialization. The details of our limitations are discussed in the supplementary. These limitations will motivate future research to further improve our Fairness Learning via Contrastive Attention. ,c) achieve the optimal value, the enforcement ℓ i between the feature and the cluster will converges to ℓ i = L -1 . Proof: Let us consider the optimization of the Eqn. (4) in the paper as follows:\nmin - L i=1 log exp(f t i × c) f ′ exp(f ′ × c) = - L i=1 log ℓ i subject to L i=1 ℓ i = ℓ (9)\nwhere ℓ is the total enforcement between features f t i and cluster c. Then, the optimization of Eqn. (4) in the paper can be rewritten by using Lagrange multiplier as follows:\nL {ℓ i } L i=1 , λ = - L i=1 log ℓ i + λ( L i=1 ℓ i -ℓ)(10)\nwhere λ is the Lagrange multiplier. Then, the contrastive clustering loss in Eqn. 
(4) in the paper achieves minimum if and only if:\n∂L {ℓ i } L i=1 , λ ∂ℓ i = -ℓ -1 i + λ = 0 ∂L {ℓ i } L i=1 , λ ∂λ = L i=1 ℓ i -ℓ = 0 ⇒ L {ℓ i } L i=1 , λ = -L log ℓ L(11)\nAs the total enforcement between features and the cluster is normalized, i.e., ℓ ∈ [0..1], the contrastive clustering loss L {ℓ i } L i=1 , λ achieves minimum when log ℓ = 0 ⇒ ℓ = 1. Then, the enforcement between a single feature and the cluster will be equal to\nℓ i = ℓ L = L -1 ." }, { "figure_ref": [], "heading": "Proof of Proposition 2", "publication_ref": [ "b4", "b9", "b14" ], "table_ref": [], "text": "Proposition 2: If the fairness contrastive clustering loss L α Cont (; , c) achieve the optimal value, the enforcement ℓ i between the feature and the cluster will converges to ℓ i = (α -1 + L) -1 . Proof: We first define the the enforcement between transitive vector v and the cluster c as ℓ v = exp (v×c) f ′ exp(f ′ ×c) . Then, let us consider the optimization of Eqn. (5) in the paper as follows:\nmin - L i=1 α log ℓ i -log ℓ v subject to L i=1 ℓ i + ℓ v = ℓ (12)\nSimilar to Eqn. (9), Eqn. ( 12) can be reformulated via Lagrange multiplier as follows:\nL {ℓ i } L i=1 , λ = - L i=1 α log ℓ i -log ℓ v +λ( L i=1 ℓ i +ℓ v -ℓ)(13\n) Then, the fairness contrastive loss L α Cont achieves minimum if and only if:\n∂L {ℓ i } L i=1 , λ ∂ℓ i = -αℓ -1 i + λ = 0 ∂L {ℓ i } L i=1 , λ ∂ℓ v = -ℓ -1 v + λ = 0 ∂L {ℓ i } L i=1 , λ ∂λ = L i=1 ℓ i + ℓ v -ℓ = 0 ⇒ L {ℓ i } L i=1 , λ = -αL log αℓ 1 + αL -log ℓ 1 + αL(14)\nAs in Eqn. (14), the fairness contrastive learning loss L {ℓ i } L i=1 , λ archives minimum when log ℓ = 0 → ℓ = 1. Thus, the enforcement between the single feature the cluster will be re-balanced as ℓ i = α 1+αL = (α -1 + L) -1 ." }, { "figure_ref": [], "heading": "Implementation", "publication_ref": [ "b25", "b27", "b27", "b27" ], "table_ref": [], "text": "Implementation Our framework is implemented in Py-Torch and trained on four 40GB-VRAM NVIDIA A100 GPUs. The contrastive loss in our implementation is normalized with respect to the number of samples. These models are optimized by the SGD optimizer [2] with momentum 0.9, weight decay 10 -4 , and a batch size of 16. The learning rate of the first learning step and the continual steps is set to 10 -4 and 5 × 10 -5 respectively. To update the cluster vectors c, following prior work [25,27,52], we maintain a set of 500 features for each cluster and update the clusters after 100 steps with a momentum η = 0.99. In our domain incremental experiments, all clusters are updated at each learning step by momentum update. The number of features selected for each cluster in the visual grammar model is set to M = 128. The balanced weight of CSS objective λ CL and the cluster regularizer λ C is set to 1. Following the common practices [27,52], the margin between clusters ∇ is set to 10. Unknown Cluster Initialization As mentioned in the main paper, we adopt the DB-SCAN algorithm to initialize the clusters for unknown samples. In addition, to reduce the noise clusters and isolated clusters, we also merge several close clusters, i.e., if the distance between two clusters is less than the margin 2∇, these will be merged into a single cluster where the new cluster center will be the means of these two merging cluster centers. 
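To make the initialization step above concrete, the following is a minimal sketch of DBSCAN-based unknown-cluster initialization with the 2∇ merging rule. The eps and min_samples values, the Euclidean distance metric, and the greedy merge order are illustrative assumptions, not the exact settings used in the paper.

```python
# Simplified sketch of the unknown-cluster initialization described above:
# DBSCAN over the features of unknown pixels, followed by greedily merging
# clusters whose centers are closer than 2*margin (i.e., 2∇).
import numpy as np
from sklearn.cluster import DBSCAN

def init_unknown_clusters(unknown_feats, margin=10.0):
    """unknown_feats: (N, D) array of features of pixels predicted as unknown."""
    labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(unknown_feats)
    centers = [unknown_feats[labels == k].mean(axis=0)
               for k in np.unique(labels) if k != -1]        # drop DBSCAN noise (-1)
    merged = []
    for c in centers:
        for i, m in enumerate(merged):
            if np.linalg.norm(c - m) < 2 * margin:           # closer than 2∇ -> merge
                merged[i] = 0.5 * (m + c)                    # new center = mean of the two
                break
        else:
            merged.append(c)
    return np.stack(merged)                                  # (N_U, D) initial unknown centers

# Toy example: 2-D features drawn around three well-separated prototypes.
protos = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
feats = np.concatenate([p + np.random.randn(500, 2) for p in protos])
print(init_unknown_clusters(feats, margin=10.0).shape)       # typically (3, 2)
```

In the full method, the centers produced this way serve only as the initial unknown cluster vectors; they are subsequently refined by the momentum update of the cluster vectors described in the implementation details above.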
By empirical observation, we have noticed that the number of unknown clusters initialized at each learning step, i.e., N U at the current learning step t, is not greater than 1.5× times of the remaining classes (i.e., |C t+1..T |) in the dataset, e.g., in our ADE20K 100-50 experiments, at the first learning step of 100 classes, there are 68 unknown clusters that have been initialized while there are 50 remaining unknown classes in the dataset.\nCluster Assignment In our approach, we use our visual grammar model to assign the cluster for each feature representation. Theoretically, although there is a possibility that a feature could not be assigned to a cluster via the visual grammar model, we have empirically observed that this issue rarely happens in our approach. Indeed, since we initialize the known clusters via the DB-SCAN, it guarantees that for each feature, there is at least one cluster nearby that the feature representation should belong to. However, to preserve the integrity of our approach, for the outlier features in cases that cannot be assigned clusters via the visual grammar model, these outliers will be heuristically assigned to their closest clusters as similar to [27,52]. " }, { "figure_ref": [], "heading": "Additional Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experiment Results of ADE20K 50-50 Benchmark", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "Table 8 presents the results of our method on the ADE20K 50-50 benchmark compared to prior methods. For fair comparisons, we use the DeepLab-V3 and Transformer in this experiment. As shown in the results, our proposed FALCON approach significantly outperforms prior methods. The results of our approach have reduced the gap with the upper bound result. " }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b17" ], "table_ref": [ "tab_9", "tab_10", "tab_0" ], "text": "Effectiveness of Choosing Margin ∇ Table 9 studies the effectiveness of the value of margin ∇ to the performance of our approach on ADE20K 100-50 and ADE20K 100-10 benchmarks. As shown in the results, the change of ∇ also slightly influences the performance of the model. Since the margin defines the distance between two clusters, while the smaller value of the margin ∇ could cause the incorrect cluster assignment of the features, the larger value of the margin ∇ could produce the less compact clusters. 10, the optimal performance of our approach is M = 128. When the number of features selected is small (M = 96), it does not have enough number of features to form the visual grammar so the model is hard to exploit the correlation among features and the cluster. Meanwhile, when we increase the number of selected features (M = 256), the clusters will consist of many outlier features (the ones that do not belong to the cluster), thus being challenging for the visual grammar model to exploit the topological structures of the feature distribution. As shown in the performance, the more powerful the segmentation model is, the better performance of the model is.\nIn particular, our approach has shown its flexibility since it consistently improves the performance of the segmentation model and achieves the SOTA performance on two different benchmarks, i.e., the performance of Transformer models achieves 41.9%, and 40.3% on ADE20K 100-50, ADE20K 100-10, respectively.\nTable 11. 
Effectiveness of Different Backbones on ADE20K.\n(a) ADE20K 100-50 Backbone 0-100 101-150 all Major Minor where F t and F t-1 are the feature representations extracted from the model at learning step t and step t-1, respectively, and the metric L measure the knowledge gap between F t and F t-1 . Then, given a set of cluster c, we consider the following triangle inequality of the metric L as follows:\n∀c : L(F t , F t-1 ) ≤ L(F t , c) + L(c, F t-1 ) ⇔ L(F t , F t-1 )\nL distill ≤ 1 |C 1..T | c   L(F t , c) L Cont +L(c, F t-1 )   (16)\nAt the computational time of Contrastive Clustering loss, the set of cluster vectors c is fixed (could be considered as constants). In addition, the features extracted at learning step t -1, i.e., F t-1 , are constant due to the fix pre-trained model θ t-1 . Therefore, without a strict argument, the distance L(c, F t-1 ) could be considered as constant. Therefore, Eqn. ( 16) can be further derived as follows:\nL(F t , F t-1 )\nL distill = O L 1 |C 1..T | Constant c L(F t , c) L Cont + L(c, F t-1 ) Constant = O c L(F t , c) L Cont ⇒ L distill (F t-1 , F t ) = O L Cont (F t , c) (17\n)\nwhere O is the Big-O notation. Hence, from Eqn. (17), without lack of generality, we can observe that the Contrastive Clustering Loss is the upper bound of the Knowledge Distillation loss. Therefore, by minimizing the Contrastive Clustering Loss, the constraint of Knowledge Distillation is also maintained due to the property of the upper bound." }, { "figure_ref": [], "heading": "Discussion of Limitations", "publication_ref": [ "b56", "b13" ], "table_ref": [], "text": "In our paper, we choose a specific set of hyper-parameters and learning approaches to support our hypothesis. However, our work could contain several limitations. First, choosing the scaling factor α could be considered as one of the potential limitations of our approach. In practice, when data keeps continuously growing, the pre-defined scaling factor α could not be good enough to control the fairness among classes. Our work focuses on investigating the effectiveness of our proposed losses to fairness, catastrophic forgetting, and background shift problems. Thus, the investigation of balance weights among losses has not been fully exploited, and we leave this experiment as our future work. Third, initializing the unknown clusters at each training step could potentially be room for improvement since the bad initial clusters could result in difficulty during training and updating these clusters and linking the unknown clusters learned in previous steps and new initial unknown clusters at the current learning steps have been yet fully exploited in our method. In addition, while our approach is designed for the DeepLab-V3 and Transformer segmentation networks [7,56], the extensions of FALCON to mask-based segmentation networks [4,12,13] could be a potential next research for further performance improvement. These limitations could motivate new studies to further improve Fairness Learning via the Contrastive Attention Approach to continual learning in the future." } ]
Continual learning in semantic scene segmentation aims to continually learn new unseen classes in dynamic environments while maintaining previously learned knowledge. Prior studies focused on modeling the catastrophic forgetting and background shift challenges in continual learning. However, fairness, another major challenge that causes unfair predictions and low performance on minor classes compared to major classes, has yet to be well addressed. In addition, prior methods do not model the unknown classes well, and thus produce non-discriminative features among unknown classes. This paper presents a novel Fairness Learning via Contrastive Attention approach to continual learning in semantic scene understanding. In particular, we first introduce a new Fairness Contrastive Clustering loss to address the problems of catastrophic forgetting and fairness. Then, we propose an attention-based visual grammar approach to effectively model the background shift problem and unknown classes, producing better feature representations for different unknown classes. Through our experiments, our proposed approach achieves State-of-the-Art (SOTA) performance on different continual learning settings of three standard benchmarks, i.e., ADE20K, Cityscapes, and Pascal VOC, and promotes the fairness of the continual semantic segmentation model.
FALCON: Fairness Learning via Contrastive Attention Approach to Continual Semantic Scene Understanding in Open World
[ { "figure_caption": "Figure 1 .1Figure 1. Our Fairness Learning via Contrastive Attention to Continual Semantic Segmentation. The Fairness Contrastive Clustering Loss promotes the fairness of the model while the Attention-based Visual Grammar models the unknown classes.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. The Data Class Distribution of ADE20K. The major classes occupy more than 75% of the total pixels of the dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. The Proposed Open-World Fairness Continual Learning Framework.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. The Enforcement Loss of Vanilla Contrastive Clustering LCont and Fairness Contrastive Clustering L α Cont on Pascal VOC. Since LCont suffers severe biased between major and minor classes, its clusters of minor classes remain scattered. Meanwhile, our L α Cont produces a more uniform loss among classes, thus promoting the fairness and compactness of clusters.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Remark 1 :1Given a center c and a set of M features {f c i } M i=1 where f c i denotes the feature f i belonging to the cluster c, and ∀i ∈ [1..M -1] : cos(f c i , c) ≥ cos(f c i+1 , c) the Visual Grammar of the cluster c parameterized by Θ", "figure_data": "", "figure_id": "fig_4", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. The Proposed Visual Grammar Model.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Cluster Distribution at Learning Step t = 1 of ADE20K 100-50 (classes 109-144 are future classes) without (left) and with (right) Fairness Learning via Contrastive Attention.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Visualization of Our Results on ADE20K 100-50.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Supplementary 1 .Proposition 1 :11Proof of Propositions 1 and 2 1.1. Proof of Proposition 1 If the contrastive clustering loss L Cont (;", "figure_data": "", "figure_id": "fig_8", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Effectiveness of Fairness Contrastive Learning Loss.", "figure_data": "(a) ADE20K 100-50L Cont L α Cont0-101 100-150allMajor Minor✓44.615.234.851.526.4✓44.624.537.952.130.8(b) ADE20K 100-10L Cont L α Cont0-101 100-150allMajor Minor✓41.916.033.249.924.9✓44.420.436.451.828.74. Experiments4.1. 
Implementations and Evaluation ProtocolsImplementation Following common practices [3, 19, 52],", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Effectiveness of Scaling Factor α in L α Cont .", "figure_data": "(a) ADE20K 100-50α = 0.143.119.8 35.350.627.7α = 0.0544.624.5 37.952.130.8α = 0.0143.621.3 36.251.028.7α = 0.005 42.418.6 34.550.126.6(b) Pascal VOC 15-5α0-15 16-20allMajor Minorα = 0.174.851.6 69.376.963.5α = 0.0576.251.3 70.379.063.8α = 0.0179.454.8 73.581.367.7α = 0.005 74.648.9 68.577.661.6", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Effectiveness of Contributions of Our Proposed Losses.", "figure_data": "(a) ADE20K 100-50L CE L α Contϕ R C 0-100 101-150allMajor Minor✓0.018.96.30.09.4✓✓44.07.931.951.622.1✓✓✓43.821.836.451.129.1✓✓✓✓44.624.537.952.130.8(b) ADE20K 100-10L CE L α Contϕ R C 0-100 101-150allMajor Minor✓0.03.51.20.01.8✓✓39.013.130.447.821.6✓✓✓43.418.535.151.227.1✓✓✓✓44.420.436.451.828.74.2. Ablation Study", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Effectiveness of Visual Grammar on ADE20K.", "figure_data": "(a) ADE20K 100-500-100 101-150allMajor MinorNearest Cluster44.311.533.451.524.3Fixed ϕ44.617.635.652.027.4Adaptive ϕ44.624.537.952.130.8(b) ADE20K 100-100-100 101-150allMajor MinorNearest Cluster40.114.331.548.722.9Fixed ϕ43.018.534.950.627.0Adaptive ϕ44.420.436.451.828.7", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparisons with Prior Methods on Pascal VOC. .04 75.69 78.71 47.54 71.28 74.92 52.54 64.26 Upper Bound 80.84 74.97 79.44 80.84 74.97 79.44 80.84 74.97 79.44", "figure_data": "MethodPascal VOC 15-5 0-15 16-20 allPascal VOC 15-1 0-15 16-20 allPascal VOC 10-1 0-10 11-20 allMiB [3]76.37 49.97 70.08 38.00 13.50 32.20 20.00 20.10 20.10DeepLab-V3PLOP [19] RCIL [60] FairCL [52] SSUL [5] FALCON75.73 51.71 70.09 65.10 21.10 54.60 44.00 15.50 30.50 ---70.60 23.70 59.40 55.40 15.10 34.30 ---72.00 22.70 60.30 42.30 25.60 34.40 77.82 50.10 71.22 77.31 36.59 67.61 71.31 45.98 59.25 79.35 54.77 73.50 78.34 42.57 69.83 73.94 49.73 62.41Upper Bound 79.77 72.35 77.43 79.77 72.35 77.43 78.41 76.35 77.43TransformerPLOP [19] SSUL [5] FairCL [52] FALCON72.51 48.37 66.76 64.59 37.23 58.08 48.53 33.71 41.47 79.91 56.83 74.41 79.91 40.56 70.54 74.06 51.85 63.48 ---73.50 22.80 61.50 57.10 14.20 36.60 81.20 58", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Comparisons on Cityscapes.", "figure_data": "Method11-5 11-11-1LWF-MC [42] 58.90 56.92 31.24DeepLab-V3ILT [33] M İB [3] PLOP [19] RCIL [60] FairCL [52]59.14 57.75 30.11 61.51 60.02 42.15 63.51 62.05 45.24 64.30 63.00 48.90 66.96 66.61 49.22FALCON70.74 69.75 55.24Upper Bound 79.30 79.30 79.30Trans.FairCL [52] FALCON Upper Bound 83.80 83.80 83.80 67.85 67.09 55.68 71.33 70.14 58.79", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Extract features on D t by F (; , θt-1) 2: Step 1: Initialize new known clusters for C t of features extracted in Step 0 3: Step 2: Initialize potential unknown clusters of features extracted in Step 0 4: Step 3: Train CSS Model F (; θt) on D t 5: Step 4: Extract features on D t by F (; , θt) 6: Step 5: Train Visual Grammar Model ϕ(; , Θt) on current known clusters c and features extracted in Step 4 7: return F (; θt) and ϕ(; Θt)", "figure_data": "Continual Learning Procedure Algorithm 1 illustrates thetraining procedure of our CSS 
approach.Algorithm 1 CSS Procedure At Learning Step t", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": ". Experimental results on ADE20K 50-50 BenchmarkADE20K 50-50 (3 steps)NetworkMethod0-50 50-150 allMiB [3]45.6 21.0 29.3PLOP [19]48.8 21.0 30.4LGKD+PLOP [59] 49.4 29.4 36.0DeepLab-V3 RCIL [60]47.8 23.0 31.2RCIL+LGKD [59] 49.1 27.2 34.4FairCL [52]49.7 26.8 34.6FALCON50.6 31.2 37.6Upper Bound51.1 33.25 38.9FairCL [52]49.6 27.8 35.6Transformer FALCON53.0 36.8 42.2Upper Bound54.9 40.8 45.5", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Effectiveness of Choosing Margin ∇ Effectiveness of Choosing Number of Features M We study the impact of choosing the number of features M in the visual grammar model. As in shown Table", "figure_data": "(a) ADE20K 100-500-100 101-150allMajor Minor∇ = 544.421.836.951.929.4∇ = 1044.624.537.952.130.8∇ = 2044.722.237.251.729.9(b) ADE20K 100-100-100 101-150allMajor Minor∇ = 543.218.735.050.527.3∇ = 1044.420.436.451.828.7∇ = 2043.519.935.751.227.9", "figure_id": "tab_9", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Effectiveness of Number of Features M in a Cluster of Visual Grammar Model.Effectiveness of Different Segmentation NetworksTo illustrate the flexibility of our proposed approach, we evaluate our proposed approach with different network backbones. Table11illustrates the results of our approach using DeepLab-V3 [7], SegFormer[56] with different backbones, i.e., ResNet-50, ResNet-101, MiT-B2, and MiT-B3.", "figure_data": "(a) ADE20K 100-500-100 101-150allMajor MinorM = 9643.019.635.250.527.5M = 12844.624.537.952.130.8M = 25643.621.636.351.028.9(b) ADE20K 100-100-100 101-150allMajor MinorM = 9642.216.433.650.225.3M = 12844.420.436.451.828.7M = 25642.717.134.250.626.0", "figure_id": "tab_10", "figure_label": "10", "figure_type": "table" } ]
Thanh-Dat Truong; Utsav Prabhu; Bhiksha Raj; Jackson Cothren; Khoa Luu
[ { "authors": "Nikita Araslanov; Stefan Roth", "journal": "", "ref_id": "b0", "title": "Self-supervised augmentation consistency for adapting semantic segmentation", "year": "2021" }, { "authors": "Léon Bottou", "journal": "", "ref_id": "b1", "title": "Large-scale machine learning with stochastic gradient descent", "year": "2010" }, { "authors": "Fabio Cermelli; Massimiliano Mancini; Samuel Rota Bulò; Elisa Ricci; Barbara Caputo", "journal": "", "ref_id": "b2", "title": "Modeling the background for incremental learning in semantic segmentation", "year": "2020" }, { "authors": "Fabio Cermelli; Matthieu Cord; Arthur Douillard", "journal": "IEEE/CVF Computer Vision and Pattern Recognition Conference", "ref_id": "b3", "title": "Comformer: Continual learning in semantic and panoptic segmentation", "year": "2023" }, { "authors": "Sungmin Cha; Youngjoon Yoo; Taesup Moon", "journal": "", "ref_id": "b4", "title": "Ssul: Semantic segmentation with unknown label for exemplar-based class-incremental learning", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b5", "title": "", "year": "2021" }, { "authors": "Liang-Chieh Chen; George Papandreou; Florian Schroff; Hartwig Adam", "journal": "", "ref_id": "b6", "title": "Rethinking atrous convolution for semantic image segmentation", "year": "2017" }, { "authors": "Liang-Chieh Chen; George Papandreou; Iasonas Kokkinos; Kevin Murphy; Alan L Yuille", "journal": "TPAMI", "ref_id": "b7", "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs", "year": "2018" }, { "authors": "Mayee Chen; Daniel Y Fu; Avanika Narayan; Michael Zhang; Zhao Song; Kayvon Fatahalian; Christopher Ré", "journal": "PMLR", "ref_id": "b8", "title": "Perfectly balanced: Improving transfer and robustness of supervised contrastive learning", "year": "2022" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "PMLR", "ref_id": "b9", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Xinlei Chen; Haoqi Fan; Ross Girshick; Kaiming He", "journal": "", "ref_id": "b10", "title": "Improved baselines with momentum contrastive learning", "year": "2020" }, { "authors": "Xinlei Chen; * ; Saining Xie; * ; Kaiming He", "journal": "", "ref_id": "b11", "title": "An empirical study of training self-supervised vision transformers", "year": "2021" }, { "authors": "Bowen Cheng; Alexander G Schwing; Alexander Kirillov", "journal": "NeurIPS", "ref_id": "b12", "title": "Per-pixel classification is not all you need for semantic segmentation", "year": "2021" }, { "authors": "Bowen Cheng; Ishan Misra; Alexander G Schwing; Alexander Kirillov; Rohit Girdhar", "journal": "", "ref_id": "b13", "title": "Masked-attention mask transformer for universal image segmentation", "year": "2022" }, { "authors": "Sanghyeok Chu; Dongwan Kim; Bohyung Han", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b14", "title": "Learning debiased and disentangled representations for semantic segmentation", "year": "2021" }, { "authors": "Marius Cordts; Mohamed Omran; Sebastian Ramos; Timo Rehfeld; Markus Enzweiler; Rodrigo Benenson; Uwe Franke; Stefan Roth; Bernt Schiele", "journal": "", "ref_id": "b15", "title": "The Cityscapes dataset for semantic urban scene understanding", "year": "2016" }, { "authors": "Jiequan Cui; Zhisheng Zhong; Shu Liu; Bei Yu; Jiaya Jia", "journal": "", "ref_id": "b16", "title": "Parametric 
contrastive learning", "year": "2021" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "", "ref_id": "b17", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Arthur Douillard; Matthieu Cord; Charles Ollion; Thomas Robert; Eduardo Valle", "journal": "", "ref_id": "b18", "title": "Podnet: Pooled outputs distillation for small-tasks incremental learning", "year": "2020" }, { "authors": "Arthur Douillard; Yifu Chen; Arnaud Dapogny; Matthieu Cord", "journal": "", "ref_id": "b19", "title": "Plop: Learning without forgetting for continual semantic segmentation", "year": "2021" }, { "authors": "Beyza Ermis; Giovanni Zappella; Martin Wistuba; Aditya Rawal; Cédric Archambeau", "journal": "", "ref_id": "b20", "title": "Continual learning with transformers for image classification", "year": "2022" }, { "authors": "Martin Ester; Hans-Peter Kriegel; Jörg Sander; Xiaowei Xu", "journal": "", "ref_id": "b21", "title": "A density-based algorithm for discovering clusters in large spatial databases with noise", "year": "1996" }, { "authors": "M Everingham; S M A Eslami; L Van Gool; C K I Williams; J Winn; A Zisserman", "journal": "IJCV", "ref_id": "b22", "title": "The pascal visual object classes challenge: A retrospective", "year": "2015" }, { "authors": "Robert French", "journal": "Trends in Cognitive Sciences", "ref_id": "b23", "title": "Catastrophic forgetting in connectionist networks", "year": "1999" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b24", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b25", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Lukas Hoyer; Dengxin Dai; Luc Van Gool", "journal": "", "ref_id": "b26", "title": "DAFormer: Improving network architectures and training strategies for domain-adaptive semantic segmentation", "year": "2022" }, { "authors": "Salman Joseph; Fahad Khan; Shahbaz Khan; Vineeth N Balasubramanian", "journal": "", "ref_id": "b27", "title": "Towards open world object detection", "year": "2021" }, { "authors": "James Kirkpatrick; Razvan Pascanu; Neil Rabinowitz; Joel Veness; Guillaume Desjardins; Andrei A Rusu; Kieran Milan; John Quan; Tiago Ramalho; Agnieszka Grabska-Barwinska; Demis Hassabis; Claudia Clopath; Dharshan Kumaran; Raia Hadsell", "journal": "PNAS", "ref_id": "b28", "title": "Overcoming catastrophic forgetting in neural networks", "year": "2017" }, { "authors": "Jie Lei; Linjie Li; Luowei Zhou; Zhe Gan; Tamara L Berg; Mohit Bansal; Jingjing Liu", "journal": "", "ref_id": "b29", "title": "Less is more: Clipbert for video-and-language learning via sparse sampling", "year": "2021" }, { "authors": "Yunfan Li; Peng Hu; Zitao Liu; Dezhong Peng; Joey Tianyi Zhou; Xi Peng", "journal": "", "ref_id": "b30", "title": "Contrastive clustering", "year": "2021" }, { "authors": "Ziwei Liu; Zhongqi Miao; Xiaohang Zhan; Jiayun Wang; Boqing Gong; Stella X Yu", "journal": "", "ref_id": "b31", "title": "Large-scale long-tailed recognition in an open world", "year": "2019" }, { "authors": "David Lopez; - Paz; Marc'aurelio Ranzato", "journal": "", "ref_id": "b32", "title": "Gradient episodic memory for continual learning", "year": "2017" }, { "authors": "Umberto Michieli; Pietro Zanuttigh", "journal": "ICCVWS", "ref_id": "b33", "title": 
"Incremental learning techniques for semantic segmentation", "year": "2019" }, { "authors": "Umberto Michieli; Pietro Zanuttigh", "journal": "", "ref_id": "b34", "title": "Incremental learning techniques for semantic segmentation", "year": "2019" }, { "authors": "Xuan-Bac Nguyen; Duc Toan Bui; Chi Nhan Duong; Tien D Bui; Khoa Luu", "journal": "", "ref_id": "b35", "title": "Clusformer: A transformer based clustering approach to unsupervised large-scale face and visual landmark recognition", "year": "2021" }, { "authors": "Aaron Van Den Oord; Yazhe Li; Oriol Vinyals", "journal": "", "ref_id": "b36", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "Firat Ozdemir; Orcun Goksel", "journal": "International journal of computer assisted radiology and surgery", "ref_id": "b37", "title": "Extending pretrained segmentation networks with additional anatomical structures", "year": "2019" }, { "authors": "Philipp Firat Ozdemir; Orcun Fuernstahl; Goksel", "journal": "Springer", "ref_id": "b38", "title": "Learn the new, keep the old: Extending pretrained models with new anatomy and images", "year": "2018" }, { "authors": "Minh Hieu Phan; The-Anh Ta; Son Lam Phung; Long Tran-Thanh; Abdesselam Bouzerdoum", "journal": "", "ref_id": "b39", "title": "Class similarity weighted knowledge distillation for continual semantic segmentation", "year": "2022" }, { "authors": "Yiqiao Qiu; Yixing Shen; Zhuohao Sun; Yanchong Zheng; Xiaobin Chang; Weishi Zheng; Ruixuan Wang", "journal": "Pattern Recognition", "ref_id": "b40", "title": "Sats: Self-attention transfer for continual semantic segmentation", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b41", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": " Sylvestre-Alvise; Alexander Rebuffi; Georg Kolesnikov; Christoph H Sperl; Lampert", "journal": "", "ref_id": "b42", "title": "icarl: Incremental classifier and representation learning", "year": "2017" }, { "authors": "Cunjun Jiawei Ren; Shunan Yu; Xiao Sheng; Haiyu Ma; Shuai Zhao; Hongsheng Yi; Li", "journal": "", "ref_id": "b43", "title": "Balanced meta-softmax for long-tailed visual recognition", "year": "2020" }, { "authors": "Anthony Robins", "journal": "Connection Science", "ref_id": "b44", "title": "Catastrophic forgetting, rehearsal and pseudorehearsal", "year": "1995" }, { "authors": "Mohammad Rostami", "journal": "NeurIPS", "ref_id": "b45", "title": "Lifelong domain adaptation via consolidated internal distribution", "year": "2021" }, { "authors": "Antoine Saporta; Arthur Douillard; Tuan-Hung Vu; Patrick Pérez; Matthieu Cord", "journal": "", "ref_id": "b46", "title": "Multi-head distillation for continual unsupervised domain adaptation in semantic segmentation", "year": "2022" }, { "authors": "Konstantin Shmelkov; Cordelia Schmid; Karteek Alahari", "journal": "", "ref_id": "b47", "title": "Incremental learning of object detectors without catastrophic forgetting", "year": "2017" }, { "authors": "Christian Simon; Piotr Koniusz; Mehrtash Harandi", "journal": "", "ref_id": "b48", "title": "On learning the geodesic path for incremental learning", "year": "2021" }, { "authors": "Attila Szabó; Hadi Jamali-Rad; Siva-Datta Mannava", "journal": "", "ref_id": "b49", "title": "Tilted cross-entropy (tce): Promoting fairness in semantic segmentation", 
"year": "2021" }, { "authors": "Sebastian Thrun", "journal": "", "ref_id": "b50", "title": "Lifelong learning algorithms", "year": "1998" }, { "authors": "Thanh-Dat Truong; Ngan Le; Bhiksha Raj; Jackson Cothren; Khoa Luu", "journal": "", "ref_id": "b51", "title": "Fredom: Fairness domain adaptation approach to semantic scene understanding", "year": "2023" }, { "authors": "Thanh-Dat Truong; Hoang-Quan Nguyen; Bhiksha Raj; Khoa Luu", "journal": "NeurIPS", "ref_id": "b52", "title": "Fairness continual learning approach to semantic scene understanding in open-world environments", "year": "2023" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b53", "title": "Attention is all you need", "year": "2017" }, { "authors": "Riccardo Volpi; Diane Larlus; Grégory Rogez", "journal": "", "ref_id": "b54", "title": "Continual adaptation of visual representations via domain randomization and meta-learning", "year": "2021" }, { "authors": "Jiaqi Wang; Wenwei Zhang; Yuhang Zang; Yuhang Cao; Jiangmiao Pang; Tao Gong; Kai Chen; Ziwei Liu; Chen Change Loy; Dahua Lin", "journal": "", "ref_id": "b55", "title": "Seesaw loss for longtailed instance segmentation", "year": "2021" }, { "authors": "Enze Xie; Wenhai Wang; Zhiding Yu; Anima Anandkumar; Jose M Alvarez; Ping Luo", "journal": "NeurIPS", "ref_id": "b56", "title": "Segformer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "Lei Yang; Xiaohang Zhan; Dapeng Chen; Junjie Yan; Chen Change Loy; Dahua Lin", "journal": "", "ref_id": "b57", "title": "Learning to cluster faces on an affinity graph", "year": "2019" }, { "authors": "Lei Yang; Dapeng Chen; Xiaohang Zhan; Rui Zhao; Chen Change Loy; Dahua Lin", "journal": "", "ref_id": "b58", "title": "Learning to cluster faces via confidence and connectivity estimation", "year": "2020" }, { "authors": "Ze Yang; Ruibo Li; Evan Ling; Chi Zhang; Yiming Wang; Dezhao Huang; Keng Teck Ma; Minhoe Hur; Guosheng Lin", "journal": "", "ref_id": "b59", "title": "Label-guided knowledge distillation for continual semantic segmentation on 2d images and 3d point clouds", "year": "2023" }, { "authors": "Chang-Bin Zhang; Jia-Wen Xiao; Xialei Liu; Ying-Cong Chen; Ming-Ming Cheng", "journal": "", "ref_id": "b60", "title": "Representation compensation networks for continual semantic segmentation", "year": "2022" }, { "authors": "Bolei Zhou; Hang Zhao; Xavier Puig; Sanja Fidler; Adela Barriuso; Antonio Torralba", "journal": "", "ref_id": "b61", "title": "Scene parsing through ade20k dataset", "year": "2017" }, { "authors": "Jianggang Zhu; Zheng Wang; Jingjing Chen; Yi-Ping Phoebe Chen; Yu-Gang Jiang", "journal": "", "ref_id": "b62", "title": "Balanced contrastive learning for long-tailed visual recognition", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b63", "title": "Relation to Knowledge Distillation Knowledge Distillation is a common approach to continual semantic segmentation", "year": "" }, { "authors": "", "journal": "", "ref_id": "b64", "title": "has shown that the clustering loss is an upper bound of the knowledge distillation loss. Formally, the knowledge distillation loss can be formed as follows: L distill (x t", "year": "" } ]
[ { "formula_coordinates": [ 3, 54.99, 445.52, 231.37, 14.84 ], "formula_id": "formula_0", "formula_text": "θ * t = arg min θ t E x t ,ŷ t ∈D t L CE y t , ŷt + λ CL L CL F (x t )(1)" }, { "formula_coordinates": [ 3, 341.43, 114.61, 48.32, 13.33 ], "formula_id": "formula_1", "formula_text": "{c i } N K +N U i=1" }, { "formula_coordinates": [ 3, 318.08, 190.83, 227.03, 47.99 ], "formula_id": "formula_2", "formula_text": "L CL F (x t ) = c i L Cont (F t , c i ) = c i h,w -ϕ(f t h,w , c i ) log exp(f t h,w × c i ) f ′ exp(f ′ × c i )(2)" }, { "formula_coordinates": [ 3, 468.62, 698.81, 65.14, 14.34 ], "formula_id": "formula_3", "formula_text": "ℓ i = exp(f t i ×c)" }, { "formula_coordinates": [ 4, 56.22, 284.48, 230.15, 26.84 ], "formula_id": "formula_4", "formula_text": "LCont(; , c) = - L i=1 log exp(f t i × c) f ′ exp(f ′ × c) = - L i=1 log ℓi (3)" }, { "formula_coordinates": [ 4, 207.99, 347.07, 40.72, 11.23 ], "formula_id": "formula_5", "formula_text": "ℓ i = L -1 ." }, { "formula_coordinates": [ 4, 50.65, 679.85, 235.71, 32.87 ], "formula_id": "formula_6", "formula_text": "L α Cont (; , c) = -α L i=1 log exp(f t i × c) f ′ exp(f ′ × c) -log exp(v × c) f ′ exp(f ′ × c)(4)" }, { "formula_coordinates": [ 4, 308.86, 371.61, 236.25, 23.18 ], "formula_id": "formula_7", "formula_text": "-1 + L major ) -1 -(α -1 + L minor ) -1 || < ||L -1" }, { "formula_coordinates": [ 5, 338.59, 176.2, 206.53, 31 ], "formula_id": "formula_8", "formula_text": "min Θ E c,{f c i } M i=1 [-log p(f c 1 , f c 2 , ..., f c M , c, Θ)] = min Θ E c,{f c i } M i=1 [-log p(∆ c 1 , ∆ c 2 , ..., ∆ c M , c, Θ)](5)" }, { "formula_coordinates": [ 5, 336.94, 212.85, 97.5, 12.34 ], "formula_id": "formula_9", "formula_text": "∆ c i = f c i -c. Eqn. (" }, { "formula_coordinates": [ 5, 308.86, 260.69, 236.25, 35.14 ], "formula_id": "formula_10", "formula_text": "{f i } M i=1 (cos(f i , c) ≥ cos(f i+1 , c)) to determine whether f i belonging to c, i.e., u = ϕ(∆ 1 , ∆ 2 , ..., ∆ M , c) where ∆ i = f i -c, u = [u 1 ," }, { "formula_coordinates": [ 5, 317.08, 337.53, 228.03, 13.97 ], "formula_id": "formula_11", "formula_text": "Θ * = arg min Θ E c,{f i } M i=1 [-log p(u|∆ 1 , ∆ 2 , ..., ∆ M , c, Θ)] (6)" }, { "formula_coordinates": [ 5, 319.09, 495.67, 226.02, 21.63 ], "formula_id": "formula_12", "formula_text": "z 0 = LN([∆ 1 , . . . , ∆ M , c]) +β, a l = z l + MHSA(z)) z l+1 = a l + MLP(LN(a l )), u = Proj(z L ϕ )(7)" }, { "formula_coordinates": [ 6, 50.11, 366.76, 236.25, 25.69 ], "formula_id": "formula_13", "formula_text": "arg min θ t E x t ,ŷ t L CE y t , ŷt + λ CL c i L α Cont F t , ci + λ C R C (c)(8)" }, { "formula_coordinates": [ 12, 73.41, 219.22, 212.96, 65.28 ], "formula_id": "formula_14", "formula_text": "min - L i=1 log exp(f t i × c) f ′ exp(f ′ × c) = - L i=1 log ℓ i subject to L i=1 ℓ i = ℓ (9)" }, { "formula_coordinates": [ 12, 67.41, 347.35, 218.95, 30.32 ], "formula_id": "formula_15", "formula_text": "L {ℓ i } L i=1 , λ = - L i=1 log ℓ i + λ( L i=1 ℓ i -ℓ)(10)" }, { "formula_coordinates": [ 12, 96.44, 438.63, 189.92, 85.44 ], "formula_id": "formula_16", "formula_text": "∂L {ℓ i } L i=1 , λ ∂ℓ i = -ℓ -1 i + λ = 0 ∂L {ℓ i } L i=1 , λ ∂λ = L i=1 ℓ i -ℓ = 0 ⇒ L {ℓ i } L i=1 , λ = -L log ℓ L(11)" }, { "formula_coordinates": [ 12, 143.36, 582.19, 61.86, 13.47 ], "formula_id": "formula_17", "formula_text": "ℓ i = ℓ L = L -1 ." 
}, { "formula_coordinates": [ 12, 370.61, 91.15, 174.5, 65.27 ], "formula_id": "formula_18", "formula_text": "min - L i=1 α log ℓ i -log ℓ v subject to L i=1 ℓ i + ℓ v = ℓ (12)" }, { "formula_coordinates": [ 12, 308.86, 194.06, 236.25, 39.91 ], "formula_id": "formula_19", "formula_text": "L {ℓ i } L i=1 , λ = - L i=1 α log ℓ i -log ℓ v +λ( L i=1 ℓ i +ℓ v -ℓ)(13" }, { "formula_coordinates": [ 12, 319.96, 266.03, 225.16, 124.01 ], "formula_id": "formula_20", "formula_text": "∂L {ℓ i } L i=1 , λ ∂ℓ i = -αℓ -1 i + λ = 0 ∂L {ℓ i } L i=1 , λ ∂ℓ v = -ℓ -1 v + λ = 0 ∂L {ℓ i } L i=1 , λ ∂λ = L i=1 ℓ i + ℓ v -ℓ = 0 ⇒ L {ℓ i } L i=1 , λ = -αL log αℓ 1 + αL -log ℓ 1 + αL(14)" }, { "formula_coordinates": [ 14, 352.13, 210.53, 192.98, 49 ], "formula_id": "formula_21", "formula_text": "L distill ≤ 1 |C 1..T | c   L(F t , c) L Cont +L(c, F t-1 )   (16)" }, { "formula_coordinates": [ 14, 308.86, 362.01, 233.64, 98.71 ], "formula_id": "formula_22", "formula_text": "L distill = O L 1 |C 1..T | Constant c L(F t , c) L Cont + L(c, F t-1 ) Constant = O c L(F t , c) L Cont ⇒ L distill (F t-1 , F t ) = O L Cont (F t , c) (17" }, { "formula_coordinates": [ 14, 541.79, 453.81, 3.32, 6.91 ], "formula_id": "formula_23", "formula_text": ")" } ]
2024-03-28
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b28", "b36", "b7", "b12", "b20", "b25", "b12", "b12" ], "table_ref": [], "text": "3D localization [18,28] using natural language descriptions in a city-scale map is crucial for enabling autonomous agents to cooperate with humans to plan their trajectories [11] in applications such as goods delivery or vehicle pickup [36,38]. When delivering a takeaway, couriers often encounter the \"last mile problem\". Pinpointing the exact delivery spot in residential neighborhoods or large office buildings is challenging since GPS signals are bound to fail among tall buildings and vegetation [34,37]. Couriers often rely on voice instructions over the phone from the recipient to determine this spot. More generally, the \"last mile problem\" occurs whenever a user attempts to navigate to an unfamiliar place. It is therefore essential to develop the capability to perform localization from the natural language, as shown in Fig. 1.\nAs a possible remedy, we can match linguistic descriptions to a pre-built point cloud map using calibrated depth sensors like LiDAR. Point cloud localization, which focuses on the scene's geometry, offers several advantages over images. It remains consistent despite lighting, weather, and season changes, whereas the same geometric structure in images might appear vastly different.\nThe main challenge of 3D localization from natural language descriptions lies in accurately interpreting the language and semantically understanding large-scale point clouds. To date, only a few networks have been proposed for language-based localization in a 3D large-scale city map. Text2Pose [12] is a pioneering work that aligns objects described in text with their respective instances in a point cloud, through a coarse-to-fine approach. In the coarse stage, Text2Pose first adopts a text-to-cell crossmodel retrieval method to identify the possible regions that contain the target position. In particular, Text2Pose matches the text and the corresponding submaps by the global descriptors from 3D point clouds using PointNet++ [20] and the global text descriptors using a bidirectional LSTM cell [10,25]. This method describes a submap with its contained instances of objects, which ignores the instance relationship for both points and sentences. Recently, the authors of RET [33] noted this shortcoming and designed Relation-Enhanced Transformer networks. While this results in better global descriptors, both approaches match global descriptors using the pairwise ranking loss without considering the imbalance in positive and negative samples.\nInspired by RET [33], we also notice the importance of effectively leveraging relational dynamics among instances within submaps for geometric representation extraction. Furthermore, there is a natural hierarchy in the descriptions, composed of sentences, each with word tokens. We thus recognize the need to analyze relationships within (intratext) and between (inter-text) descriptions. To address these challenges, we adopt a frozen pre-trained large language model T5 [23] and design a hierarchical transformer with max-pooling (HTM) that acts as an intra-and inter-text encoder, capturing the contextual details within and across sentences. Additionally, we enhance the instance encoder in Text2Pose [12] by adding a number encoder and adopting contrastive learning to maintain a balance between positive and negative pairs. 
Another observation is that, when refining the location prediction in the fine localization stage, the widely used text-instance matching module in previous methods should be reduced since the noisy matching or inaccurate offset predictions are a fatal interference in predicting the exact position of the target. To address this issue, we propose a novel matching-free fine localization network. Specifically, we first design a prototype-based map cloning (PMC) module to increase the diversity of retrieved submaps. Then, we introduce a cascaded cross-attention transformer (CCAT) to enrich the text embedding by fusing the semantic information from point clouds. These operations enable one-stage training to directly predict the target position without any text-instance matcher.\nTo summarize, the main contributions of this work are: • We focus on the relatively-understudied problem of point cloud localization from textual descriptions, to address the \"last mile problem\". • We propose a novel attention-based method that is hierar-chical and represents contextual details within and across sentence descriptions of places.\n• We study the importance of positive-negative pairs balance in this setting, and show how contrastive learning is an effective tool that significantly improves performance. • We are the first to completely remove the usage of textinstance matcher in the final localization stage. We propose a lighter and faster localization model while still achieving state-of-the-art performance via our designed prototype-based map cloning (PMC) module in training and cascaded cross-attention transformer (CCAT). • We conduct extensive experiments on the KITTI360Pose benchmark [12] and show that the proposed Text2Loc greatly improves over the state-of-the-art methods. 𝑇!!\"#$%\"&'(%\")(\"*%(+\"',\"-\"./-0\"&-/1)2.3\" 𝑇\"!\"#$%\"&'(%\")(\"%-(+\"',\"-\"4-/15./%%2\"6%.%+-+)'2 𝑇#!\"#$%\"&'(%\")(\"('7+$\"',\"-\"./-05./%%2\"&'8%3\" 𝑇$!\"#$%\"&'(%\")(\"2'/+$\"',\"-\"98-:1\"&'8%3\" 𝑇%!\"#$%\"&'(%\")(\"*%(+\"',\"-\"98-:1\"6%.%+-+)'23" }, { "figure_ref": [], "heading": "Related work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Global place recognition", "publication_ref": [ "b41", "b13", "b14", "b21", "b7", "b0", "b9", "b39", "b12" ], "table_ref": [], "text": "Top-k submaps Given a text-based position description, we first identify a set of coarse candidate locations, \"submaps,\" potentially containing the target position. This is achieved by retrieving the top-k nearest submaps from a previously constructed database of submaps using our novel text-to-submap retrieval model. Fine localization. We then refine the center coordinates of the retrieved submaps via our designed matching-free position estimation module, which adjusts the target location to increase accuracy.\nPointNet to generate point-wise local descriptors. Furthermore, various methods [3,6,8,16,17,40,41] have explored the integration of different transformer networks, specifically stacked self-attention blocks, to learn longrange contextual features. In contrast, Minkloc3D [13] employs a voxel-based strategy to generate a compact global descriptor using a Feature Pyramid Network [14] (FPN) with generalized-mean (GeM) pooling [21]. However, the voxelization methods inevitably suffer from lost points due to the quantization step. CASSPR [37] thus introduces a dual-branch hierarchical cross attention transformer, combining both the advantages of voxel-based approaches with the point-based approaches. 
After getting the coarse location of the query scan, the pose estimation can be computed with the point cloud registration algorithms, like the iterative closest point (ICP) [30] or autoencoder-based registration [7]. By contrast to point cloud based localization, we use natural language queries to specify any target location.\n3D vision and language. Recent work has explored the cross-modal understanding of 3D vision and language. [19] bridges language implicitly to 3D visual feature representations and predicts 3D bounding boxes for target objects. Methods [1,4,9,39] locate the most relevant 3D target objects in a raw point cloud scene given by the query text descriptions. However, these methods focus on real-world indoor scene localization. Text2Pos [12] is the first attempt to tackle the large city-scale outdoor scene localization task, which identifies a set of coarse locations and then refines the pose estimation. Following this, Wang et al. [33] propose a Transformer-based method to enhance representation discriminability for both point clouds and textual queries." }, { "figure_ref": [], "heading": "Problem statement", "publication_ref": [ "b12" ], "table_ref": [], "text": "We begin by defining the large-scale 3D map M ref = {m i : i = 1, ..., M } to be a collection of cubic submaps m i . Each submap m i = {P i,j : j = 1, ..., p} includes a set of 3D object instances P i,j . Let T be a query text description consisting of a set of hints { ⃗ h k } h k=1 , each describing the spatial relation between the target location and an object instance. Following [12], we approach this task in a coarse-to-fine manner. The text-submap global place recognition involves the retrieval of submaps based on T . This stage aims to train a function F , which encodes both T and a submap m into a unified embedding space. In this space, matched query-submap pairs are brought closer together, while unmatched pairs are repelled. In fine-grained localization, we employ a matching-free network to directly regress the final position of the target based on T and the retrieved submaps. Thus, the task of training a 3D localization network from natural language is defined as identifying the ground truth position (x, y) (2D planar coordinates w.r.t. the scene coordinate system) from\nM ref : min ϕ,F E (x,y,T )∼D (x, y) -ϕ T, argmin m∈Mref d (F (T ), F (m)) 2 (1)\nwhere d(•, •) is a distance metric (e.g. the Euclidean distance), D is the dataset, and ϕ is a neural network that is trained to output fine-grained coordinates from a text embedding T and a submap m." }, { "figure_ref": [ "fig_1" ], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "Fig. 2 shows our Text2Loc architecture. Given a text-based query position description, we aim to find a set of coarse candidate submaps that potentially contain the target position by using a frozen pre-trained T5 language model [23] and an intra-and inter-text encoder with contrastive learning, described in Section 4.1. Next, we refine the location based on the retrieved submaps via a designed fine localization module, which will be explained in Section 4.2. Section 4.3 describes the training with the loss function." }, { "figure_ref": [], "heading": "Global place recognition", "publication_ref": [], "table_ref": [], "text": "3D point cloud-based place recognition is usually expressed as a 3D retrieval task. 
Given a query LiDAR scan, the aim is to retrieve the closest match and its corresponding location from the database by matching its global descriptor against the global descriptors extracted from a database of reference scans based on their descriptor distances. Following this general approach, we adopt the text-submap cross-modal" }, { "figure_ref": [], "heading": "Textural position descriptions", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "Pretrained T5 model Intra-and intertext encoder", "publication_ref": [ "b12" ], "table_ref": [], "text": "Max Pooling global place recognition for coarse localization. With this stage, we aim to retrieve the nearest submap in response to a textual query. The main challenge lies in how to find simultaneously robust and distinctive global descriptors for 3D submaps S and textual queries T . Similar to [12,33], we employ a dual branch to encode S and T into a shared embedding space, as shown in Fig. 3 \n𝑆!• 𝑇! 𝑆!• 𝑇\" 𝑆!• 𝑇# ⋯ 𝑆!• 𝑇$ 𝑆\"• 𝑇! 𝑆\"• 𝑇\" 𝑆\"• 𝑇# ⋯ 𝑆\"• 𝑇$ 𝑆#• 𝑇! 𝑆#• 𝑇\" 𝑆#• 𝑇# ⋯ 𝑆#• 𝑇$ ⋮ ⋮ ⋮ ⋱ ⋮ 𝑆$• 𝑇! 𝑆%• 𝑇\" 𝑆%• 𝑇# ⋯ 𝑆$• 𝑇$ 𝑇 ! 𝑇 \" … 𝑇 # 𝑇 $ ⋮ S ! S \" S # S $ … Instances in submaps" }, { "figure_ref": [ "fig_2", "fig_2", "fig_2" ], "heading": "(top).", "publication_ref": [ "b20", "b12", "b22" ], "table_ref": [], "text": "Text branch. We initially use a frozen pre-trained large language model, T5 [23], to extract nuanced features from textual descriptions, enhancing the embedding quality. We then design a hierarchical transformer with max-pooling layers to capture the contextual details within sentences (via self-attention) and across them (via the semantics that are shared by all sentences), as depicted in Fig. 3 (Bottom right). Each transformer is a residual module comprising Multi-Head Self-Attention (MHSA) and FeedForward Network (FFN) sublayers. The feed-forward network comprises two linear layers with the ReLU activation function. More details are in the Supplementary Materials.\n3D submap branch. Each instance P i in the submap S N is represented as a point cloud, containing both spatial and color (RGB) coordinates, resulting in 6D features (Fig. 3 (bottom left)). We utilize PointNet++ [20] (which can be replaced with a more powerful encoder) to extract semantic features from the points. Additionally, we obtain a color embedding by encoding RGB coordinates with our color encoder and a positional embedding by encoding the instance center Pi (i.e., the mean coordinates) with our positional encoder. We find that object categories consistently differ in point counts; for example, roads typically (> 1000 points) have a higher point count than poles (< 500 points). We thus design a number encoder, providing potential classspecific prior information by explicitly encoding the point numbers. All the color, positional, and number encoders are 3-layer multi-layer perceptrons (MLPs) with output dimensions matching the semantic point embedding dimension. We merge the semantic, color, positional, and quantity embeddings through concatenation and process them with a projection layer, another 3-layer MLP. This projection layer produces the final instance embedding F pi . Finally, we aggregate in-submap instance descriptors {F pi } Np i=1 into a global submap descriptor F S using an attention layer [35] followed by a max pooling operation.\nText-submap Contrastive learning. 
We introduce a cross-modal contrastive learning objective to address the limitations of the widely used pairwise ranking loss in [12,33]. This objective aims to jointly drive closer the feature centroids of 3D submaps and the corresponding text prompt. In our overall architecture, illustrated in Figure 3, we incorporate both a text encoder and a point cloud encoder. These encoders serve the purpose of embedding the text-submap pairs into text features denoted as F T ∈ R 1×C and 3D submap features represented as F S ∈ R 1×C , respectively. Here, C signifies the embedding dimension. Inspired by CLIP [22], we computer the feature distance between language descriptions and 3D submaps with a contrastive learning loss (See Sec. 4.3 for details)." }, { "figure_ref": [ "fig_3" ], "heading": "Fine localization", "publication_ref": [ "b12", "b7", "b7" ], "table_ref": [], "text": "Following the text-submap global place recognition, we aim to refine the target location prediction within the retrieved submaps in fine localization. Although the final localization network in previous methods [12,33] achieved notable success using a text-submap matching strategy, the inherent ambiguity in the text descriptions significantly impeded accurate offset predictions for individual object instances. To address this issue, we propose a novel matching-free fine localization network, as shown in Fig. 4. The text branch (top) captures the fine-grained features by using a frozen pre-trained language model T5 [23] and an attention unit followed by a max pooling layer. The submap branch (bot-tom) performs a prototype-based map cloning module to increase more map variants and then extracts the point cloud features using an instance encoder, the same as in the global place recognition. We then fuse the text-submap feature with a Cascaded Cross-Attention Transformer and finally regress the target position via a simple MLP.\nCascaded Cross-attention Transformer (CCAT). To efficiently exploit the relationship between the text branch and the 3D submap branch, we propose a CCAT to fuse the features from the two branches. The CCAT consists of two Cross Attention Transformers (CAT), each is the same as in [37]. The CAT1 takes the point cloud features as Query and the text features as Key and Value. It extracts text features with reference to the point features and outputs point feature maps that are informed by the text features. Conversely, CAT2 produces enhanced text features by taking the text features as the Query and the enhanced point cloud features from CAT1 as the Key and Value. Notably, the CAT1 and the CAT2 are a cascading structure, which is the main difference from the HCAT in [37]. In this work, two cascaded CCATs are used. More ablation studies and analyses are in the Supplementary Materials.\nPrototype-based Map Cloning (PMC). To produce more effective submap variants for training, we propose a novel prototype-based map cloning module. For each pair {T i , S i }, we hope to generate a collection G i of surrounding map variants centered on the current map S i , which can be formulated as follows:\nG i = {S j | sj -si ∞ < α, sj -c i ∞ < β },(2)\nwhere si , sj are the center coordinates of the submaps S i and S j respectively. c i represents the ground-truth target position described by T i , α and β are the pre-defined thresholds. In this work, we set α = 15 and β = 12.\nIn practice, we find that certain submaps in G i have an insufficient number of object instances corresponding to the textual descriptions T i . 
To address this, we introduce a filtering process by setting a minimum threshold N m = 1. This threshold implies that at most one instance mismatch is permissible. After applying this filter, we randomly selected a single submap from the refined G i for training." }, { "figure_ref": [], "heading": "Loss function", "publication_ref": [ "b12", "b22", "b12" ], "table_ref": [], "text": "Global place recognition. Different from the pairwise ranking loss widely used in previous methods [12,33], we train the proposed method for text-submap retrieval with a cross-model contrastive learning objective. Given an input batch of 3D submap descriptors {F S i } N i=1 and matching text descriptors {F T i } N i=1 where N is the batch size, the con-trastive loss among each pair is computed as follows,\nl(i, T, S) = -log exp(F T i • F S i /τ ) j∈N exp(F T i • F S j /τ ) -log exp(F S i • F T i /τ ) j∈N exp(F S i • F T j /τ ) ,(3)\nwhere τ is the temperature coefficient, similar to CLIP [22]. Within a training mini-batch, the text-submap alignment objective L(T, S) can be described as:\nL(T, S) = 1 N i∈N l(i, T, S) .(4)\nFine localization. Unlike previous method [12,33], our fine localization network does not include a text-instance matching module, making our training more straightforward and faster. Note that this model is trained separately from the global place recognition. Here, our goal is to minimize the distance between the predicted location of the target and the ground truth. In this paper, we use only the mean squared error loss L r to train the translation regressor.\nL(C gt , C pred ) = C gt -C pred 2 ,(5)\nwhere C pred = (x, y) (see Eq. ( 1)) is the predicted target coordinates, and C gt is the ground-truth coordinates." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Benchmark Dataset", "publication_ref": [ "b12", "b12", "b12" ], "table_ref": [], "text": "We train and evaluate the proposed Text2Loc on the KITTI360Pose benchmark presented in [12]. It includes point clouds of 9 districts, covering 43,381 position-query pairs with a total area of 15.51 km 2 . Following [12], we choose five scenes (11.59 km 2 ) for training, one for validation, and the remaining three (2.14 km 2 ) for testing. The 3D submap is a cube that is 30m long with a stride of 10m. This creates a database with 11,259/1,434/4,308 submaps for training/validation/testing scenes and a total of 17,001 submaps for the entire dataset. For more details, please refer to the supplementary material in [12]." }, { "figure_ref": [], "heading": "Evaluation criteria", "publication_ref": [ "b12" ], "table_ref": [], "text": "Following [12], we use Retrieve Recall at Top k (k ∈ {1, 3, 5}) to evaluate text-submap global place recognition.\nFor assessing localization performance, we evaluate with respect to the top k retrieved candidates (k ∈ {1, 5, 10}) and report localization recall. Localization recall measures the proportion of successfully localized queries if their error falls below specific error thresholds, specifically ϵ < 5/10/15m by default." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Global place recognition", "publication_ref": [ "b12" ], "table_ref": [ "tab_2" ], "text": "We compare our Text2Loc with the state-of-the-art methods: Text2Pos [12] and RET [33]. 
We evaluate global place recognition performance on the KITTI360Pose validation and test sets for a fair comparison. Table 2 shows the top-1/3/5 recall of each method. The best performance on the validation set reaches a recall of 0.32 at top-1. Notably, this outperforms the recall achieved by the current state-of-the-art method RET by a wide margin of 78%. Furthermore, Text2Loc achieves recall rates of 0.56 and 0.67 at top-3 and top-5, respectively, representing substantial improvements of 65% and 52% relative to the performance of RET. These improvements are also observed on the test set, indicating the superiority of the method over baseline approaches. Note that we report only the values available in the original publication of RET. These improvements demonstrate the efficacy of our proposed Text2Loc in capturing cross-modal local information and generating more discriminative global descriptors. More qualitative results are given in Section 6.2." }, { "figure_ref": [], "heading": "Fine localization", "publication_ref": [ "b12", "b12" ], "table_ref": [ "tab_1" ], "text": "To improve localization accuracy, [12,33] further introduce a fine localization stage. To make the comparisons fair, we follow the same setting as [12,33] to train our fine localization network. As illustrated in Table 1, we report the top-k (k = 1/5/10) recall rates under different error thresholds ϵ < 5/10/15m for comparison. Text2Loc achieves a top-1 recall rate of 0.37 on the validation set and 0.33 on the test set under the error bound ϵ < 5m, which are 95% and 2× higher than the previous state-of-the-art RET, respectively. Furthermore, our Text2Loc performs consistently better when relaxing the localization error constraints or increasing k. This demonstrates that Text2Loc can interpret the text descriptions more accurately and understand the point clouds semantically better than the previous state-of-the-art methods. We also show some qualitative results in Section 6.2." }, { "figure_ref": [], "heading": "Performance analysis 6.1. Ablation study", "publication_ref": [ "b12", "b12", "b12" ], "table_ref": [], "text": "The following ablation studies evaluate the effectiveness of different components of Text2Loc, including both the text-submap global place recognition and fine localization.
Global place recognition. To assess the relative contribution of each module, we remove the frozen pre-trained large language model T5, the hierarchical transformer with max-pooling (HTM) module in the text branch, and the number encoder in the 3D submap branch from our network one by one. We also analyze the performance of the proposed text-submap contrastive learning. All networks are trained on the KITTI360Pose dataset, with results shown in Table 3. Utilizing the frozen pre-trained LLM T5, we observed an approximate 8% increase in retrieval accuracy at top 1 on the test set. While the HTM notably enhances performance on the validation set, it shows marginal improvements on the test set. Additionally, integrating the number encoder has led to a significant 6% improvement in the recall metric at top 1 on the validation set. Notably, the performance on the validation/test set reaches 0.32/0.28 recall at top 1, exceeding the same model trained with the pairwise ranking loss by 52% and 40%, respectively, highlighting the superiority of the proposed contrastive learning approach.
Fine localization.
To analyze the effectiveness of each proposed module in our matching-free fine-grained localization, we separately evaluate the Cascaded Cross-Attention Transformer (CCAT) and Prototype-based Map Cloning (PMC) module, denoted as Text2Loc CCAT and Text2Loc PMC. For a fair comparison, all methods utilize the same submaps retrieved from our global place recognition. The results are shown in Table . 4. Text2Pos* significantly outperforms the origin results of Text2Pos [12], indicating the superiority of our proposed global place recognition. Notably, replacing the matcher in Text2Pos [12] with our CCAT results in about 7% improvements at top 1 on the test set. We also observe the inferior performance of Text2Loc PMC to the proposed method when interpreting only the proposed PMC module into the Text2Pos [12] fine Methods Submap Retrieval Recall ↑" }, { "figure_ref": [], "heading": "Validation Set", "publication_ref": [], "table_ref": [], "text": "Test Set localization network. The results are consistent with our expectations since PMC can lead to the loss of object instances in certain submaps (See Supp.). Combining both modules achieves the best performance, improving the performance by 10% at top 1 on the test set. This demonstrates adding more training submaps by PMC is beneficial for our matching-free strategy without any text-instance matches.\nk = 1 k = 3 k = 5 k = 1 k = 3 k = 5 w/o" }, { "figure_ref": [ "fig_4" ], "heading": "Qualitative analysis", "publication_ref": [], "table_ref": [], "text": "In addition to quantitative results, we show some qualitative results of two correctly point cloud localization from text descriptions and one failure case in Fig. 5. Given a query text description, we visualize the ground truth, top-3 retrieved submaps, and our fine localization results. In textsubmap global place recognition, a retrieved submap is defined as positive if it contains the target location. Text2Loc excels in retrieving the ground truth submap or those near in most cases. However, there are instances where negative submaps are retrieved, as observed in (b) with the top 3. Text2Loc showcases its ability to predict more accurate locations based on positively retrieved submaps in fine localization. We also present one failure case in (c), where all retrieved submaps are negative. In these scenarios, our fine localization struggles to predict accurate locations, highlighting its reliance on the coarse localization stage. An additional observation is that despite their distance from the target location, all these negative submaps contain instances similar to the ground truth. These observations show the challenge posed by the low diversity of outdoor scenes, emphasizing the need for highly discriminative representations to effectively disambiguate between submaps." }, { "figure_ref": [ "fig_4" ], "heading": "Computational cost analysis", "publication_ref": [ "b12", "b12" ], "table_ref": [], "text": "In this section, we analyze the required computational resources of our coarse and matching-free fine localization network regarding the number of parameters and time ef- T5) and 1.84 M parameters respectively. For fine localization, we replace the proposed matching-free CCAT module with the text-instance matcher in [12,33], denoted as Text2Loc Matcher. From Table . 
5, we observe that Text2Loc is nearly two times more parameter-efficient than the baselines [12,33] and only uses their 5% inference time.\nLocalization Recall (ϵ < 5m) ↑ Methods Validation Set Test Set k = 1 k = 5 k = 10 k = 1 k = 5 k = 10\nThe main reason is that the previous methods adopt Superglue [27] as a matcher, which resulted in a heavy and timeconsuming process. Besides, our matching-free architecture prevents us from running the Sinkhorn algorithm [5]. These improvements significantly enhance the network's efficiency without compromising its performance." }, { "figure_ref": [], "heading": "Robustness analysis", "publication_ref": [], "table_ref": [], "text": "In this section, we analyze the effect of text changes on localization accuracy. For a clear demonstration, we only change one sentence in the query text descriptions, denoted as Text2Loc modified. All networks are evaluated on the KITTI360Pose test set, with results shown in Table . 6. Text2Loc modified only achieves the recall of 0.15 at top-1 retrieval, indicating our text-submap place recognition network is very sensitive to the text embedding. We also observe the inferior performance of Text2Loc modified in the fine localization. More qualitative results are in the Supplementary Materials." }, { "figure_ref": [ "fig_5" ], "heading": "Embedding space analysis", "publication_ref": [ "b31", "b12" ], "table_ref": [], "text": "We employ T-SNE [31] to visually represent the learned embedding space, as illustrated in Figure 6. The baseline method Text2Pos [12] yields a less discriminative space, with positive submaps often distant from the query text descriptions and even scattered across the embedding space. In contrast, our method brings positive submaps and query text representations significantly closer together within the embedding distance. It shows that the proposed network indeed results in a more discriminative cross-model space for recognizing places." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We proposed Text2Loc for 3D point cloud localization based on a few natural language descriptions. In global place recognition, we capture the contextual details within and across text sentences with a novel attention-based method and introduce contrastive learning for the textsubmap retrieval task. In addition, we are the first to propose a matching-free fine localization network for this task, which is lighter, faster, and more accurate. Extensive experiments demonstrate that Text2Loc improves the localization performance over the state-of-the-art by a large margin. Future work will explore trajectory planning in real robots." }, { "figure_ref": [], "heading": "Text2Loc: 3D Point Cloud Localization from Natural Language", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Overview", "publication_ref": [ "b12" ], "table_ref": [], "text": "In this supplementary material, we provide more experiments on the KITTI360Pose dataset [12] to demonstrate the effectiveness of our Text2Loc and show more insights we gathered during the development. We first present thorough ablation experiments to study the impact of the proposed CCAT on the fine localization performance in Sec. B. In Sec. 
C, we provide qualitative results of top-3 candidate submaps retrieved and localization performance when changing one sequence in the query textural descriptions. Next, we describe implementation details about our network architecture in Sec. D and analysis of the proposed PMC module in Sec. E. Finally, Sec. F shows more visualizations of point cloud localization from text descriptions." }, { "figure_ref": [], "heading": "B. More analysis of Cascaded Cross-Attention Transformers", "publication_ref": [ "b7", "b7", "b7", "b7" ], "table_ref": [ "tab_8" ], "text": "In this section, we first explore the performance of different numbers of Cascaded Cross-Attention Transformers (CCAT) in our fine localization network. We further provide a comparison to study the difference between our CCAT and Hierarchical Cross-Attention Transformer (HCAT) in [37].\nNumber of CCAT. We insert CCAT one by one before the MLP layer in Text2Loc. '0' means using a single Cross Attention Transformer (CAT) to fuse text and 3D point cloud features. Table 7 shows the localization performance of our Tex2Loc with different numbers of CCAT units. As seen from the table, Text2Loc achieves the best performance with 2 CCAT units. When the number expands to 3, the performance degrades. This implies that the text-submap feature fusion is sufficient with fewer CCAT units. On the other hand, when the number is set to 1, the performance decreases. Therefore, we set the fixed number of CCAT as 2 in our network.\nDifference with HCAT. Recent work CASSPR [37] has explored the integration of 3D point-wise features with voxelized representations through a designed Hierarchical Cross-Attention Transformer (HCAT). In HCAT, two parallel Cross Attention Transformers (CAT1 and CAT2) process inputs from different branches (point and voxel), each serving as query and key respectively. In contrast, our Cascaded Cross-Attention Transformer (CCAT) employs a sequential, cascaded structure to merge text and point cloud cross-modal information. Notably, in our CCAT, the second CAT utilizes the output of the first CAT as its key and value, distinguishing it from the parallel architecture of HCAT. different modules within our Text2Loc architecture. Utilizing the proposed CCAT, we observed an approximate 4% increase in retrieval accuracy at top 10 on the test set. This table demonstrates a consistently superior performance of our CCAT compared to the HCAT used in [37]. Motivation of CCAT. The motivation for the CCAT module in fine localization arose from the challenge of target position regression based on the text descriptions. Encoding accurate textual features is crucial for regression since the model directly predicts target positions, without any textinstance matcher. We thus design a cascade structure to enhance text features with the information from retrieved point clouds. The HCAT [37] module, in contrast, aims to compensate for the quantization losses for the LiDARbased place recognition task. HCAT should ensure that each branch is useful in isolation, thus preventing one branch from dominating over the other." }, { "figure_ref": [], "heading": "C. Visualization of robustness analysis", "publication_ref": [], "table_ref": [], "text": "Fig. 7 visualizes some qualitative results for Sec. 6.4. For each instance, we display the original query text descriptions along with the top 3 retrieved submaps and their final predicted locations at the top, followed by modified queries (highlighted in red) and their results at the bottom. 
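For illustration, the cascaded fusion analyzed in the previous section can be sketched as follows; nn.MultiheadAttention stands in for the full CAT block of [37], which additionally contains feed-forward and normalization layers, so this is a simplified sketch rather than the released implementation.

```python
import torch.nn as nn

class CCAT(nn.Module):
    """Cascaded Cross-Attention Transformer: CAT2 consumes the output of CAT1."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.cat1 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cat2 = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, point_feats, text_feats):
        # CAT1: point features as Query, text features as Key/Value.
        enhanced_points, _ = self.cat1(point_feats, text_feats, text_feats)
        # CAT2: text features as Query, CAT1-enhanced point features as Key/Value.
        enhanced_text, _ = self.cat2(text_feats, enhanced_points, enhanced_points)
        return enhanced_points, enhanced_text
```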
In the first example, we cannot find the positive submaps in the top-3 matches, leading to a complete localization failure. In the second example, even though we identify the positive" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements. This work was supported by the ERC Advanced Grant SIMULACRON, by the Munich Center for Machine Learning, and by the Royal Academy of Engineering (RF\\201819\\18\\163)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "The pose is east of a beige sidewalk. The pose is south of a beige wall. The pose is west of a black fence. The pose is west of black vegetation. The pose is north of a black terrain." }, { "figure_ref": [], "heading": "Text descriptions Top 1 Top 2 Top 3", "publication_ref": [], "table_ref": [], "text": "Fine localization" }, { "figure_ref": [], "heading": "Global place recognition", "publication_ref": [], "table_ref": [], "text": "The pose is on top of a gray road. The pose is east of a beige sidewalk. The pose is on top of a bright-gray vegetation. The pose is west of a black fence. The pose is west of black vegetation. The pose is north of a black terrain.\nThe pose is on top of black vegetation. The pose is north of black vegetation. The pose is east of a gray-green lamp. The pose is south of § dark-green sidewalk. The pose is north of a black trash bin. The pose is east of a dark-green box.\nThe pose is on top of black vegetation. The pose is north of black vegetation. The pose is east of a dark-green box. The pose is south of a dark-green sidewalk. The pose is north of a black trash bin. The pose is east of a dark-green box. submaps in the global place recognition, the exact localization is still off. The results are consistent with our expectation that accurate text embedding is essential for predicting the target location in fine localization." }, { "figure_ref": [], "heading": "D. Implementation Details", "publication_ref": [ "b20", "b12", "b12" ], "table_ref": [], "text": "We train the model with Adam optimizer for the textsubmap global place recognition with a learning rate (LR) of 5e-4. The model is trained for a total 20 epochs with batch size 64, and we follow a multi-step training schedule wherein we decay LR by a factor of 0.4 at each 7 epoches. The temperature coefficient τ is set to 0.1. We consider each submap to contain a constant 28 object instances. The intra-and inter-text encoder in the text branch has 1 encoder layer respectively. We utilize PointNet++ [20] from [12] to encode every individual instance within the submap. In all quantitative results relating to global place recognition, we adopt the definition of the ground truth (GT) submap as [12], where it refers to the submap in the database that contains textual descriptions of targets, with its center point closest to the target. For the fine localization network, we train the model with an LR of 3e-4 for 35 epochs with batch size 32. To make a fair comparison, we set the embedding dimension for both text and submap branch as 256 in global place recognition and 128 in fine localization. The code is available for reproducibility.\nTransformer in global place recognition. Formally, each transformer with max-pooling in the proposed intra-and inter-text encoder can be formulated as follows:\nwhere\nthe query, key, and value matrices.\nWithin the MHSA layer, self-attention is conducted by projecting Q, K, and V using h heads, with our choice being h = 4. 
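For completeness, the scaled dot-product attention and multi-head concatenation from [32] that this formulation builds on take the standard form (a LaTeX sketch in the usual notation, with d_k denoting the key dimension):

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V,
\qquad
\mathrm{head}_i = \mathrm{Attention}\!\left(QW_i^{Q},\, KW_i^{K},\, VW_i^{V}\right),
\qquad
\mathrm{MHSA}(Q, K, V) = \mathrm{Concat}\!\left(\mathrm{head}_1, \ldots, \mathrm{head}_h\right)W^{O},
```

where W_i^Q, W_i^K, W_i^V, and W^O are the learnable projections referred to in the text and h = 4 in this work.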
More precisely, we initially calculate the weight matrix using scaled dot-product attention [32], as in Eq. 7:\nSubsequently, we compute the values for the h heads and concatenate them together as follows:\nwhere W Q,K,V,O i denote the learnable parameters. " }, { "figure_ref": [], "heading": "E. Analysis of PMC module", "publication_ref": [ "b12" ], "table_ref": [], "text": "PMC can be seen as a data augmentation. However, this augmentation is not suitable for the previous text-instance matcher in Text2Pos [12] and RET [33] since PMC can lead to the loss of object instances in certain submaps (see Fig. 8 above); thereby, solely integrating the PMC into Text2Pos results in performance degradation. Conversely, adding more training submaps by PMC benefits our Text2Loc since we adopt a matching-free strategy without any text-instance matches." }, { "figure_ref": [], "heading": "F. More visualization results", "publication_ref": [], "table_ref": [], "text": "In this section, we visualize more examples of correct point cloud localization from text descriptions and failure cases in Fig. 9. For (a) and (b), Text2Loc successfully retrieves all positive submaps within the top-3 results during global place recognition. We observe that these top-3 retrieved submaps display a high degree of semantic similarity to both the ground truth and each other. In cases of (c) -(e), despite some of the top-3 submaps being negatives retrieved by our text-submap place recognition, Text2Loc effectively localizes the text queries within a 5 m range after applying the fine localization network. It demonstrates our fine localization network can improve the localization recall, which turns such wrong cases in place recognition into a successful localization.\nWe also present some failure cases where all retrieved submaps are negative. For example, in case (g), the query text description contains an excessive number of objects of the same category 'Pole'. This description ambiguity poses a significant challenge to our place recognition network, leading to the retrieval of incorrect submaps. In the future, We hope to investigate more precise and accurate text descriptions, like integrating specific landmark information, including street names, zip codes, and named buildings, into text-based localization networks.\nThe pose is on top of a gray-green road. The pose is north of a gray sidewalk. The pose is west of a black wall. The pose is south of a green fence. The pose is south of a dark-green pole. The pose is east of a dark-green traffic light.\nThe pose is on top of a gray road. The pose is south of a gray parking. The pose is west of a black fence. The pose is east of black vegetation. The pose is east of a gray-green terrain. The pose is west of a dark-green building.\nThe pose is on top of a gray road. The pose is north of a gray sidewalk. The pose is east of a dark-green fence. The pose is west of a green terrain. The pose is south of a black pole. The pose is north of a dark-green terrain.\nThe pose is on top of black vegetation. The pose is north of black vegetation. The pose is east of a dark-green box. The pose is south of a dark-green sidewalk. The pose is north of a black trash bin. The pose is east of a dark-green box.\nThe pose is on top of a gray road. The pose is north of a dark-green terrain. The pose is north of a green road. The pose is south of a beige sidewalk. The pose is south of green vegetation. The pose is north of gray vegetation.\nThe pose is north of a gray road. 
The pose is east of a gray pole. The pose is west of a dark-green pole. The pose is south of a dark-green pole. The pose is north of a gray road. The pose is east of a gray pole. " } ]
Hi, I am standing on the west of a green building, east of a
Text2Loc: 3D Point Cloud Localization from Natural Language
[ { "figure_caption": "Figure 2 .2Figure 2. The proposed Text2Loc architecture. It consists of two tandem modules: Global place recognition and Fine localization. Global place recognition.Given a text-based position description, we first identify a set of coarse candidate locations, \"submaps,\" potentially containing the target position. This is achieved by retrieving the top-k nearest submaps from a previously constructed database of submaps using our novel text-to-submap retrieval model. Fine localization. We then refine the center coordinates of the retrieved submaps via our designed matching-free position estimation module, which adjusts the target location to increase accuracy.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. (top) The architecture of global place recognition, (bottom) instance encoder architecture for point clouds, and the architecture of intra-and inter-text encoder. Note that the pre-trained T5 model is frozen.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. The proposed matching-free fine localization architecture. It consists of two parallel branches: one is extracting features from query text descriptions (top) and another is using the instance encoder to extract point cloud features (bottom). Cascaded cross-attention transformers (CCAT) use queries from one branch to look up information in the other branch, aiming to fuse the semantic information from point clouds into the text embedding. The result is then processed with a simple MLP to directly estimate the target position.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Qualitative localization results on the KITTI360Pose dataset: In global place recognition, the numbers in top3 retrieval submaps represent center distances between retrieved submaps and the ground truth. Green boxes indicate positive submaps containing the target location, while red boxes signify negative submaps. For fine localization, red and black dots represent the ground truth and predicted target locations, with the red number indicating the distance between them.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 66Figure 6. T-SNE visualization for the global place recognition.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Performance comparison on the KITTI360Pose benchmark[12].", "figure_data": "Localization Recall (ϵ < 5/10/15m) ↑MethodsValidation SetTest Setk = 1k = 5k = 10k = 1k = 5k = 10Text2Pos [12]0.14/0.25/0.31 0.36/0.55/0.61 0.48/0.68/0.74 0.13/0.21/0.25 0.33/0.48/0.52 0.43/0.61/0.65RET [33]0.19/0.30/0.37 0.44/0.62/0.67 0.52/0.72/0.78 0.16/0.25/0.29 0.35/0.51/0.56 0.46/0.65/0.71Text2Loc (Ours) 0.37/0.57/0.63 0.68/0.85/0.87 0.77/0.91/0.93 0.33/0.48/0.52 0.61/0.75/0.78 0.71/0.84/0.86Submap Retrieval Recall ↑MethodsValidation SetTest Set", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance comparison for gloabl place recognition on the KITTI360Pose benchmark[12]. 
Note that only values that are available in RET [33] are reported.", "figure_data": "[12]0.140.280.370.120.250.33RET [33]0.180.340.44---Text2Loc (Ours)0.320.560.670.280.490.58", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study of the fine localization on the KITTI360Pose benchmark. * indicates the fine localization network from Text2Pose[12], and the submaps retrieved through our global place recognition. Text2Loc CCAT indicates the removal of only the PMC while retaining the CCAT in our network. Conversely, Text2Loc PMC keeps the PMC but replaces the CCAT with the text-instance matcher in Text2Pos.", "figure_data": "Text2Pos [12]0.140.360.480.130.330.43Text2Pos*0.330.650.750.300.580.67Text2Loc CCAT0.320.640.740.320.600.70Text2Loc PMC0.320.640.740.290.560.66Text2Loc (Ours)0.370.680.770.330.610.71MethodsParameters (M) Runtime (ms) Localization RecallText2Loc Matcher2.0843.110.30Text2Loc (Ours)1.062.270.33", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Computational cost requirement analysis of our fine localization network on the KITTI360Pose test dataset.", "figure_data": "ficiency. For a fair comparison, all methods are tested onthe KITTI360Pose test set with a single NVIDIA TITAN X(12G) GPU. Text2Loc takes 22.75 ms and 12.37 ms to ob-tain a global descriptor for a textual query and a submaprespectively, while Text2Pos [12] achieves it in 2.31 ms and11.87 ms. Text2Loc has more running time for the textquery due to the extra frozen T5 (21.18 ms) and HTM mod-ule (1.57 ms). Our text and 3D networks have 13.65 M(without", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Performance comparisons of changing one sentence in the queries on the KITTI360Pose test set.", "figure_data": "", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Table. 8 presents a performance comparison of Localization Recall (ϵ < 5m) ↑ Localization performance for Text2Loc with different numbers of CCAT on the KITTI360Pose benchmark. '0' means using a single Cross Attention Transformer (CAT) to fuse text and 3D point cloud features.", "figure_data": "Number of CCATValidation SetTest Setk = 1 k = 5 k = 10 k = 1 k = 5 k = 1000.280.570.660.260.510.6010.360.670.770.320.590.6920.370.680.770.330.610.7130.350.670.770.320.590.69Localization Recall (ϵ < 5m) ↑MethodsValidation SetTest Setk = 1 k = 5 k = 10 k = 1 k = 5 k = 10HCAT [37]0.350.660.750.320.590.68CCAT (Ours)0.370.680.770.330.610.71", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Performance comparison of different modules within our Text2Loc architecture on the KITTI360Pose benchmark.", "figure_data": "", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" } ]
Yan Xia; Letian Shi; Zifeng Ding; João F Henriques; Daniel Cremers
[ { "authors": "Panos Achlioptas; Ahmed Abdelreheem; Fei Xia; Mohamed Elhoseiny; Leonidas Guibas", "journal": "Springer", "ref_id": "b0", "title": "Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes", "year": "2020" }, { "authors": "Mikaela ; Angelina Uy; Gim Hee; Lee ", "journal": "", "ref_id": "b1", "title": "Pointnetvlad: Deep point cloud based retrieval for large-scale place recognition", "year": "2018" }, { "authors": "Tiago Barros; Luís Garrote; Ricardo Pereira; Cristiano Premebida; Urbano J Nunes", "journal": "Springer", "ref_id": "b2", "title": "Attdlnet: Attention-based deep network for 3d lidar place recognition", "year": "2022" }, { "authors": "Dave Zhenyu; Chen ; Angel X Chang; Matthias Nießner", "journal": "Springer", "ref_id": "b3", "title": "Scanrefer: 3d object localization in rgb-d scans using natural language", "year": "2020" }, { "authors": "Marco Cuturi", "journal": "", "ref_id": "b4", "title": "Sinkhorn distances: Lightspeed computation of optimal transport", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b5", "title": "", "year": "2013" }, { "authors": "Haowen Deng; Tolga Birdal; Slobodan Ilic", "journal": "", "ref_id": "b6", "title": "Ppfnet: Global context aware local features for robust 3d point matching", "year": "2018" }, { "authors": "Gil Elbaz; Tamar Avraham; Anath Fischer", "journal": "", "ref_id": "b7", "title": "3d point cloud registration for localization using a deep neural network auto-encoder", "year": "2017" }, { "authors": "Zhaoxin Fan; Zhenbo Song; Hongyan Liu; Zhiwu Lu; Jun He; Xiaoyong Du", "journal": "AAAI", "ref_id": "b8", "title": "Svt-net: Super light-weight sparse voxel transformer for large scale place recognition", "year": "2022" }, { "authors": "Mingtao Feng; Zhen Li; Qi Li; Liang Zhang; Xiangdong Zhang; Guangming Zhu; Hui Zhang; Yaonan Wang; Ajmal Mian", "journal": "", "ref_id": "b9", "title": "Free-form description guided 3d visual graph network for object grounding in point cloud", "year": "2021" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural computation", "ref_id": "b10", "title": "Long short-term memory", "year": "1997" }, { "authors": "Yihan Hu; Jiazhi Yang; Li Chen; Keyu Li; Chonghao Sima; Xizhou Zhu; Siqi Chai; Senyao Du; Tianwei Lin; Wenhai Wang; Lewei Lu; Xiaosong Jia; Qiang Liu; Jifeng Dai; Yu Qiao; Hongyang Li", "journal": "", "ref_id": "b11", "title": "Planning-oriented autonomous driving", "year": "2023" }, { "authors": "Manuel Kolmet; Qunjie Zhou; Aljoša Ošep; Laura Leal-Taixé", "journal": "", "ref_id": "b12", "title": "Text2pos: Text-to-point-cloud cross-modal localization", "year": "2022" }, { "authors": "Jacek Komorowski", "journal": "", "ref_id": "b13", "title": "Minkloc3d: Point cloud based largescale place recognition", "year": "2021" }, { "authors": "Tsung-Yi Lin; Piotr Dollár; Ross Girshick; Kaiming He; Bharath Hariharan; Serge Belongie", "journal": "", "ref_id": "b14", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": " David G Lowe", "journal": "International journal of computer vision", "ref_id": "b15", "title": "Distinctive image features from scaleinvariant keypoints", "year": "2004" }, { "authors": "Junyi Ma; Jun Zhang; Jintao Xu; Rui Ai; Weihao Gu; Xieyuanli Chen", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b16", "title": "Overlaptransformer: An efficient and yawangle-invariant transformer network for lidar-based place recognition", "year": "2022" }, { 
"authors": "Junyi Ma; Guangming Xiong; Jingyi Xu; Xieyuanli Chen", "journal": "", "ref_id": "b17", "title": "Cvtnet: A cross-view transformer network for place recognition using lidar data", "year": "2023" }, { "authors": "Zhixiang Min; Bingbing Zhuang; Samuel Schulter; Buyu Liu; Enrique Dunn; Manmohan Chandraker", "journal": "", "ref_id": "b18", "title": "Neurocs: Neural nocs supervision for monocular 3d object localization", "year": "2023" }, { "authors": "Mihir Prabhudesai; Hsiao-Yu Fish Tung; Syed Ashar Javed; Maximilian Sieb; Adam W Harley; Katerina Fragkiadaki", "journal": "", "ref_id": "b19", "title": "Embodied language grounding with implicit 3d visual feature representations", "year": "2019" }, { "authors": "Charles Ruizhongtai; Qi ; Li Yi; Hao Su; Leonidas J Guibas", "journal": "Advances in neural information processing systems", "ref_id": "b20", "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Filip Radenović; Giorgos Tolias; Ondřej Chum", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b21", "title": "Finetuning cnn image retrieval with no human annotation", "year": "2018" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b22", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b23", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Ethan Rublee; Vincent Rabaud; Kurt Konolige; Gary Bradski", "journal": "Ieee", "ref_id": "b24", "title": "Orb: An efficient alternative to sift or surf", "year": "2011" }, { "authors": "Hasim Sak; Andrew W Senior; Franc; Beaufays", "journal": "", "ref_id": "b25", "title": "Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition", "year": "2014" }, { "authors": "Paul-Edouard Sarlin; Cesar Cadena; Roland Siegwart; Marcin Dymczyk", "journal": "", "ref_id": "b26", "title": "From coarse to fine: Robust hierarchical localization at large scale", "year": "2019" }, { "authors": "Paul-Edouard Sarlin; Daniel Detone; Tomasz Malisiewicz; Andrew Rabinovich", "journal": "", "ref_id": "b27", "title": "SuperGlue: Learning feature matching with graph neural networks", "year": "2020" }, { "authors": "Paul-Edouard Sarlin; Daniel Detone; Tsun-Yi Yang; Armen Avetisyan; Julian Straub; Tomasz Malisiewicz; Samuel Rota Bulò; Richard Newcombe; Peter Kontschieder; Vasileios Balntas", "journal": "", "ref_id": "b28", "title": "Orienternet: Visual localization in 2d public maps with neural matching", "year": "2023" }, { "authors": "Torsten Sattler; Bastian Leibe; Leif Kobbelt", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b29", "title": "Efficient & effective prioritized matching for large-scale image-based localization", "year": "2016" }, { "authors": "Aleksandr Segal; Dirk Haehnel; Sebastian Thrun", "journal": "", "ref_id": "b30", "title": "Generalized-icp", "year": "2009" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "JMLR", "ref_id": "b31", "title": "Visualizing data using t-sne", 
"year": "2008" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b32", "title": "Attention is all you need", "year": "2017" }, { "authors": "Guangzhi Wang; Hehe Fan; Mohan Kankanhalli", "journal": "", "ref_id": "b33", "title": "Text to point cloud localization with relation-enhanced transformer", "year": "2007" }, { "authors": "Yan Xia", "journal": "", "ref_id": "b34", "title": "Perception of vehicles and place recognition in urban environment based on MLS point clouds", "year": "2023" }, { "authors": "Yan Xia; Yusheng Xu; Shuang Li; Rui Wang; Juan Du; Daniel Cremers; Uwe Stilla", "journal": "", "ref_id": "b35", "title": "Soe-net: A self-attention and orientation encoding network for point cloud based place recognition", "year": "2021" }, { "authors": "Yan Xia; Yusheng Xu; Cheng Wang; Uwe Stilla", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b36", "title": "Vpcnet: Completion of 3d vehicles from mls point clouds", "year": "2021" }, { "authors": "Yan Xia; Mariia Gladkova; Rui Wang; Qianyun Li; Uwe Stilla; João F Henriques; Daniel Cremers", "journal": "", "ref_id": "b37", "title": "Casspr: Cross attention single scan place recognition", "year": "2023" }, { "authors": "Yan Xia; Qiangqiang Wu; Wei Li; Antoni B Chan; Uwe Stilla", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b38", "title": "A lightweight and detector-free 3d single object tracker on point clouds", "year": "2023" }, { "authors": "Zhihao Yuan; Xu Yan; Yinghong Liao; Ruimao Zhang; Sheng Wang; Zhen Li; Shuguang Cui", "journal": "", "ref_id": "b39", "title": "Instancerefer: Cooperative holistic understanding for visual grounding on point clouds through instance multi-level contextual referring", "year": "2021" }, { "authors": "Wenxiao Zhang; Huajian Zhou; Zhen Dong; Qingan Yan; Chunxia Xiao", "journal": "IEEE Transactions on Visualization and Computer Graphics", "ref_id": "b40", "title": "Rank-pointretrieval: Reranking point cloud retrieval via a visually consistent registration evaluation", "year": "2022" }, { "authors": "Zhicheng Zhou; Cheng Zhao; Daniel Adolfsson; Songzhi Su; Yang Gao; Tom Duckett; Li Sun", "journal": "IEEE", "ref_id": "b41", "title": "Ndt-transformer: Large-scale 3d point cloud localisation using the normal distribution transform representation", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 308.86, 364.43, 243.29, 54.92 ], "formula_id": "formula_0", "formula_text": "M ref : min ϕ,F E (x,y,T )∼D (x, y) -ϕ T, argmin m∈Mref d (F (T ), F (m)) 2 (1)" }, { "formula_coordinates": [ 4, 60.34, 84.96, 218.02, 75.17 ], "formula_id": "formula_1", "formula_text": "𝑆!• 𝑇! 𝑆!• 𝑇\" 𝑆!• 𝑇# ⋯ 𝑆!• 𝑇$ 𝑆\"• 𝑇! 𝑆\"• 𝑇\" 𝑆\"• 𝑇# ⋯ 𝑆\"• 𝑇$ 𝑆#• 𝑇! 𝑆#• 𝑇\" 𝑆#• 𝑇# ⋯ 𝑆#• 𝑇$ ⋮ ⋮ ⋮ ⋱ ⋮ 𝑆$• 𝑇! 𝑆%• 𝑇\" 𝑆%• 𝑇# ⋯ 𝑆$• 𝑇$ 𝑇 ! 𝑇 \" … 𝑇 # 𝑇 $ ⋮ S ! S \" S # S $ … Instances in submaps" }, { "formula_coordinates": [ 5, 57.6, 445.43, 228.76, 12.14 ], "formula_id": "formula_2", "formula_text": "G i = {S j | sj -si ∞ < α, sj -c i ∞ < β },(2)" }, { "formula_coordinates": [ 5, 308.86, 88.72, 238.6, 36.53 ], "formula_id": "formula_3", "formula_text": "l(i, T, S) = -log exp(F T i • F S i /τ ) j∈N exp(F T i • F S j /τ ) -log exp(F S i • F T i /τ ) j∈N exp(F S i • F T j /τ ) ,(3)" }, { "formula_coordinates": [ 5, 367.73, 172.75, 177.38, 23.59 ], "formula_id": "formula_4", "formula_text": "L(T, S) = 1 N i∈N l(i, T, S) .(4)" }, { "formula_coordinates": [ 5, 357.29, 306.24, 187.82, 12.14 ], "formula_id": "formula_5", "formula_text": "L(C gt , C pred ) = C gt -C pred 2 ,(5)" }, { "formula_coordinates": [ 6, 121.35, 218.85, 156.13, 6.74 ], "formula_id": "formula_6", "formula_text": "k = 1 k = 3 k = 5 k = 1 k = 3 k = 5" }, { "formula_coordinates": [ 7, 65.85, 103.31, 213.33, 21.17 ], "formula_id": "formula_7", "formula_text": "k = 1 k = 3 k = 5 k = 1 k = 3 k = 5 w/o" }, { "formula_coordinates": [ 7, 327.33, 75.48, 211.1, 31.27 ], "formula_id": "formula_8", "formula_text": "Localization Recall (ϵ < 5m) ↑ Methods Validation Set Test Set k = 1 k = 5 k = 10 k = 1 k = 5 k = 10" } ]
2024-03-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b27", "b23", "b38", "b42", "b21", "b5", "b11", "b33", "b2", "b31", "b6", "b6", "b34" ], "table_ref": [], "text": "Creating 3D content from generative models has become a heated research topic in the past year, which is key to a variety of downstream applications, including game and film industries, autonomous driving simulation, and virtual reality. Specifically, DreamFusion [28] was proposed to optimize a neural radiance field (NeRF) [24] using a pretrained 2D text-to-image diffusion model and the score distillation sampling (SDS) technique, showing promising results for text-to-3D generation of arbitrary objects without any 3D data. However, the indirect 3D probability distribution modeling inevitably deteriorates the final generation quality. For example, it has been reported in DreamFusion and its follow-ups [6,15,39,43] that the overall generation success rate is low and the multi-face Janus problem exists.\nAnother line of work focuses on direct 3D generation by training on large-scale 3D data. For example, [22,26] apply the probabilistic diffusion model for point cloud generation and [12,34] model the denoise diffusion process on signed distance field (SDF). These methods usually apply a specific 3D representation and train the denoise diffusion on such representation using a specific 3D dataset, e.g., ShapeNet [3], and show high-quality generation results on objects similar to the training set. However, the scale of the current 3D dataset is still too small when compared with the text-image data [32]. Even with the largest 3D dataset [7] available, it is still challenging to train a 3D diffusion model for diverse text-to-3D generation.\nIn this work, we instead extend existing text-to-2D models to a denoising diffusion process on multi-view 2.5D depth/normal data. Compared with full 3D representations such as 3D point clouds or meshes, 1) 2.5D information such as depth or normal are much easier to capture or col-lect (e.g., depth provided by active sensors); 2) the depth and normal maps perfectly align with the image data, making it possible to adapt and fine-tune a 2.5D model from a pre-trained 2D RGB model. In order to construct full 3D models, 2.5D maps viewed from multiple perspectives are necessary. Therefore, the target diffusion model should be capable of generating multi-view images with content consistency. In practice, we fine-tune existing text-to-image diffusion models on multi-view 2.5D renderings from the Objaverse dataset [7]. On the one hand, the models are adapted to 2.5D information. On the other hand, joint multi-view distribution is captured with the help of structural modification of injecting multi-view information to the self-attention layers. During inference, multi-view images are generated synchronously by common schedulers like DDIM [35], which are then fused directly into a mesh by differentiable rasterization. The whole generation process completes in seconds, which is significantly faster than SDS-based methods that typically take 30 minutes. 
The system is extensively evaluated with complex text prompts and compared with both SDS-based and direct 3D generation methods, demonstrating the capability of generating 3D textured meshes with complex geometry, diversity, and high fidelity.\nTo summarize, major contributions of the paper include: • We propose to approach the 3D generation task by training a multi-view 2.5D diffusion model, which explicitly models the 3D geometry distribution while inheriting a strong generalization ability of the large-scale pretrained 2D image diffusion. • We introduce an efficient differentiable rasterization scheme to optimize a textured mesh directly from the multi-view normal maps and RGB images. • We carefully design a generation pipeline that achieves diverse, mode-seeking-free, and high-fidelity 3D content generation in only 10 seconds." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "3D Generation by Score Distillation", "publication_ref": [ "b27", "b38", "b15", "b42" ], "table_ref": [], "text": "Score Distillation [28,39] is one of the most popular method recently for 3D Generation by pre-trained 2D diffusion models. It distillates the knowledge of image denoising to the optimization process of differentiable rendering systems so that randomly rendered views are gradually refined to describe the input text prompt. There are fundamental problems: 1) 2D diffusion models are not 3D-aware, and the generated samples have multi-face problem as a result; 2) Each optimization step requires single forward of the denoising UNet, making the whole process time consuming; 3) High guidance scale of prompts is preferred for better convergence, which leads to over-saturation of appearance; 4) the optimization is mode-seeking, losing the strong di-versity of 2D diffusion model. Follow up works are proposed to solve some of them, but not all. Zero-1-to-3 [16] fine-tunes the 2D diffusion model with multi-view dataset to grant the ability of perspective control and mitigate the problem 1 in image-to-3D task. ProlificDreamer [43] mitigate problem 3 and 4 by utilizing a KL-divergence loss to perform sampling instead of mode-seeking, at the cost of higher time complexity. In this work, we do not apply score distillation and completely separate diffusion process and 3D model optimization. The diffusion can be scheduled and conditioned normally, so that the results have diversity and realistic color. And the 3D model optimization operates on explicit representation so can be finished quickly." }, { "figure_ref": [], "heading": "Direct 3D Diffusion", "publication_ref": [ "b21", "b5", "b4", "b11", "b33", "b2", "b6" ], "table_ref": [], "text": "Fast 3D generation can be achieved by training a direct 3D diffusion model with 3D dataset. One key problem is to choose the 3D representation and design a special encoder/decoder for it. There are some early attempts to train direct 3D models for point cloud [22,26,46,49], mesh [18] and implicit representation like NeRF or SDF [5,10,12,34]. However, they are trained on the limited datasets like ShapeNet [3] which have rather small data size, geometry complexity or category diversity. Recent 3D datasets such as Objaverse [7] dramatically improve the state-of-the-art of 3D dataset, but is still limited compared to 2D image-caption datasets for training 2D diffusion models. 
In this work, we still use 2D neural network to deal with 2.5D maps, and thus we can perform fine-tuning on existing 2D diffusion models so as to inherit their strong generalization." }, { "figure_ref": [], "heading": "Multi-view Diffusion", "publication_ref": [ "b16", "b6", "b16", "b18", "b40" ], "table_ref": [], "text": "Generating multi-view images simultaneously is another strategy to bring 3D-awareness to 2D diffusion models.\nTwo key modifications are proposed to achieve this: 1) Information from other views are concatenated with the current view as keys and queries in the self-attention layers. The gathered information can be from the single projection [36], epipolar lines [17,37] or all the pixels [33];\n2) The model is fine-tuned on multi-view renderings from 3D dataset like Objaverse [7]. To construct 3D models, previous works either use SDS [33], which is still timeconsuming, or image-based reconstruction systems like NeuS [17,19,41], which requires at least 10 views to produce reasonable reconstructions. Similar to JointNet [48] which explores the 2.5D domain, we choose to generate multi-view 2.5D maps like normal, so that we can use SDSfree reconstruction while still keep small view numbers." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce our multi-view 2.5D diffusion system, which synchronously generates multi-view 2.5D 2~3 sec.\n~1.5 sec." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Normal Rendering", "publication_ref": [], "table_ref": [], "text": "Figure 1. Overview of our text-to-3D content generation system. The generation is a two-stage process, first generating geoemtry and then appearance. Specifically, the system is composed of the following steps: 1) a single denoising process to simultaneously generate 4 normal maps; 2) fast mesh optimization by differentiable rasterization; 3) a single denoising process to generate 4 images conditioned on rendered normal maps; 4) texture construction from multi-view images. The whole generation process only takes 10 seconds.\ngeometry images, i.e., normal maps, and corresponding texture maps given a text prompt as input for 3D content generation (Fig. 1). Our method is efficient enough to generate various results in only 10 seconds. In Sec. 3.1, we first briefly review the 2D diffusion model and formulate the multi-view 2.5D adaptation. We then illustrate the crossview attention which enhances the multi-view consistency in Sec. 3.2. In Sec. 3.3, we describe how to produce the final 3D model from generated 2.5D geometry images, and finally in Sec. 3.4, we demonstrate how to synthesize the texture maps given the generated normal maps, and construct the high-quality final textured triangle mesh." }, { "figure_ref": [], "heading": "Diffusion Models and 2.5D Adaptation", "publication_ref": [], "table_ref": [], "text": "Diffusion models learn a conversion from an isotropic Gaussian distribution to the target distribution (e.g. image spaces) via iterative denoising operations. We build our system on latent diffusion models (LDM), which contains a variational autoencoder (VAE) including an encoder and a decoder, a denoising network, and a condition input encoder. Compared to original diffusion models, LDM conducts the whole diffusion process in the latent image space and greatly improves efficiency and quality. 
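In the usual DDPM notation, which we adopt here only for illustration since the paper does not restate it, the forward noising of a latent z_0 at step t reads:

```latex
q(z_t \mid z_0) = \mathcal{N}\!\left(z_t;\ \sqrt{\bar{\alpha}_t}\, z_0,\ (1-\bar{\alpha}_t)\,\mathbf{I}\right),
\qquad
z_t = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\quad \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I}),
```

where the noise schedule makes \bar{\alpha}_t shrink toward zero as t grows.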
Specifically, during the forward process, a noisy latent at time t is sampled in the latent space and is gradually degraded by noise which makes it indistinguishable from the Gaussian noise, while the denoising process reverses the process, which iter-atively predicts and remove the noise to get the real images.\nIn this work, we extend 2D text-to-image diffusion models to generate multi-view geometry images. By fine-tuning a pre-trained 2D diffusion model using our 2.5D image dataset, we are able to inherit the generalization and also obtain the expressive generation ability for multi-view 2.5D geometry images. Let (X , c) be 3D data with caption from training dataset, x i ∈ X be multi-view renderings, x i,t be views corrupted by independent noise ϵ i ∈ E at time t. The denoising neural network ϵ θ is trained by\nL = E (X ,c);E∼N (0,1);t xi∈X ;ϵi∈E ∥ϵ i -ϵ θ (x i,t , c, t)∥ 2 2 .\n(1)" }, { "figure_ref": [], "heading": "Cross-view Attention", "publication_ref": [], "table_ref": [], "text": "Before fine-tuning, the multiple images generated from the base model for the same text prompt are not guaranteed to describe the same object because they are initiated from different noise maps and are denoised independently. We use a solution similar to [33]: we add data communication among the diffusion processes and fine-tune the model on multi-view image dataset to learn multi-view conditioning. Implementation-wise, we synchronize all the diffusion processes. When the calculation reaches a self-attention layer, we gather all the intermediate results as queries and values instead of just using the results from the current branch. Because images are treated as sequential inputs, the additional information can be simply concatenated together without introducing more trainable parameters. This architecture ensures that the diffusion processes are mutually conditioned, which serves as a structural prerequisite for multiview consistent generation." }, { "figure_ref": [], "heading": "Explicit Multi-view 2.5D Fusion", "publication_ref": [ "b43", "b46", "b0", "b7", "b23", "b24", "b12", "b19", "b13", "b26" ], "table_ref": [], "text": "There are various approaches available for constructing a 3D model from multi-view observations. Among them, image-based 3D reconstruction methods such as multi-view stereo [9, 44,45,47] or NeRF [1,8,24,25] requires at least 10 images for high-fidelity reconstruction, which pose significant computational challenges in multi-view diffusion scenarios. However, by taking benefits from 2.5D information, one could effectively reduce this requirement. In practice, we generate 4 normal maps aligned with world coordinates from different viewpoints (front, left, right, and back). To fuse these observations into a triangle mesh, we explore the insight of geometry optimization from an initialized mesh via differentiable rasterization. This optimization, which is independent of neural network inference, achieves convergence rapidly within seconds (see Alg. 1). Space Carving Initialization. A simplistic and straightforward approach would be to initialize the shape using basic geometric primitives like spheres and cubes and optimize. However, this often introduces significant challenges during the latter geometry optimization, particularly when the target shape's topology diverges significantly from these elementary forms. To tackle this challenge, we employ the space carving algorithm [13] for shape topology initialization. 
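A minimal NumPy sketch of this space-carving initialization is given below, assuming per-view binary foreground masks (obtained by thresholding the generated normal maps) and per-view projection functions are available; the array layouts and names are ours.

```python
import numpy as np

def carve_occupancy(voxel_centers, masks, project_fns):
    """Mark a voxel occupied unless every view projects it onto background.

    voxel_centers : (N, 3) world-space voxel center coordinates
    masks         : list of (H, W) boolean foreground masks, one per view
    project_fns   : list of callables mapping (N, 3) points to (N, 2) integer
                    pixel coordinates (u, v) for the corresponding view
    """
    occupied = np.zeros(len(voxel_centers), dtype=bool)
    for mask, project in zip(masks, project_fns):
        uv = project(voxel_centers)
        h, w = mask.shape
        valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[valid] = mask[uv[valid, 1], uv[valid, 0]]
        occupied |= hit  # a voxel stays empty only if all projections hit background
    return occupied      # passed to marching cubes to extract the initial surface
```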
Besides, it also provides a good initialization for latter geometry optimization. Fig. 2 (a) shows the space carving results. Specifically, this process begins by segregating the background normal maps through a simple value thresholding. Subsequently, a volume in the interested space is created, and each voxel is projected onto the images using the camera parameters, determining whether the corresponding pixel is part of the object or the background. By gathering all projections under different views, we construct an occupancy volume, in which a voxel's occupancy is set to 0 (indicating emptiness) if all of its projections belong to the background, and 1 (indicating occupancy) otherwise. Finally, we apply the marching cubes [20] on the occupancy volume to extract the zero level-set surface to form the initialized shape. This technique not only effectively preserves the topology, but also provides a rough shape estimation generated from the multi-view normal images. Optimization via Differentiable Rasterization. Once we have obtained the initialized geometry, we further refine the mesh details based on observational data. This refinement is mathematically formulated as an optimization problem, targeting the triangle triangle vertices V and faces F . As illustrated in Alg. 1 and Fig. 2, we first simply the marching cube-generated mesh to a lower face number, which is found to help accelerate and improve the optimization. In each optimization step, we optimize the model by minimiz- ing the L 1 loss between the rendered results and observations, as well as a normal consistency regularization. The loss function could be written as follows:\nL V = L n + λ α L α + λ nc L nc ,(2)\nwhere\nL n = 1 4 4 i ||n i -ni || 1\nis the normal rendering loss. It measures the mean L 1 distance between rendered normal maps n and the observations n under different camera viewpoints i ∈ {0, 1, 2, 3}. Similarly, L α = 1 4 4 i ||α i -αi || 1 is the alpha mask loss, which computes the difference between rasterized object mask α and the observed α, and the latter could be obtained by a simple value thresholding δ = 0.05 in the generated normal maps.\nWe additionally integrate a normal consistency term, denoted as L nc to regularize the mesh. Specifically, this regularization is designed to smooth the mesh on a global scale by minimizing the negative cosine similarity between connected face normals. The hyperparameters λ α , λ nc which control the different weights for alpha mask loss and normal consistency regularization are set to 1 and 0.1 respectively. We adopt the nvdiffrast library [14] for differentiable rasterization.\nAfter each optimization step, we further perform remeshing by merging or splitting triangle faces using the strategy from [27]. During experiments, we empirically found that only about 200 optimization steps are enough to generate a high-quality geometry mesh, which takes only around 2 to 3 seconds. As shown in the fig. 2 (c-e), the dog shape has been well optimized at around 200 steps." }, { "figure_ref": [ "fig_6", "fig_1" ], "heading": "Texture Synthesis", "publication_ref": [], "table_ref": [], "text": "Texturing the mesh is another crucial step in achieving a high-quality result. Similar to the geometry generation, we initially synthesized multi-view texture maps, which were then applied to the generated geometry. In practice, another multi-view diffusion model generates the corresponding multi-view texture maps, conditioned on text prompts and the multi-view normal images. 
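The conditioning mechanism used here, described in more detail just below, widens the first convolution of the pre-trained denoising network and zero-initializes the newly added input channels; a minimal PyTorch sketch of such an extension follows (the helper name is ours and the snippet is illustrative rather than the released code).

```python
import torch
import torch.nn as nn

def extend_input_conv(conv: nn.Conv2d, extra_channels: int) -> nn.Conv2d:
    """Widen a pre-trained first convolution so it also accepts condition
    channels (e.g., the normal-map latent), zero-initializing the new
    weights so training starts from the pre-trained behaviour."""
    new_conv = nn.Conv2d(conv.in_channels + extra_channels, conv.out_channels,
                         conv.kernel_size, stride=conv.stride,
                         padding=conv.padding, bias=conv.bias is not None)
    with torch.no_grad():
        new_conv.weight.zero_()
        new_conv.weight[:, :conv.in_channels] = conv.weight  # copy pre-trained weights
        if conv.bias is not None:
            new_conv.bias.copy_(conv.bias)
    return new_conv
```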
MVDream can generate realistic geometry and appearance with fine details but has limited diversity (Fig. 5). In contrast, our system can generate realistic 3D models efficiently. Input prompts: 1) a zoomed out DSLR photo of a wizard raccoon casting a spell, 2) a DSLR photo of a turtle standing on its hind legs, wearing a top hat and holding a cane, 3) a DSLR photo of a pirate collie dog, high resolution, and 4) a DSLR photo of a robot tiger.\nAs shown in figure 1, the architecture of the multi-view normal-conditioned diffusion model is similar to the textto-normal model, except that we extend the first convolution layer by increasing the number of channels to satisfy the normal latent condition input. Specifically, we initialize the extra trainable parameters in the first layer to zero before training. The normal condition plays a pivotal role in shape information and guides the model to generate both text-and shape-aligned texture images. We further apply super-resolution, i.e., Real-ESRGAN [42] on the generated texture maps to increase more appearance details, resulting in a 4 × resolution upscale from 256 × 256 to 1024 × 1024.\nAfter obtaining the high-resolution RGB images, the final stage is to project these images to the shape geometry and generate a global texture. We perform UV parameterization and the Poisson blending algorithm [38] to alleviate multi-view inconsistency.\nIterative updating. In most cases, a single run of the pipeline is enough to generate high-quality results. However, since we generate 4-view information at once, there may be some areas unobserved in the generated RGB im-ages (such as the top area of the object), and a texture refinement is required. To address this issue, we could iteratively update the generated images by using popular inpainting [21] pipelines in diffusion models to refine the generated textures. By computing a visibility mask at a new camera viewpoint, the invisible areas could be generated given a certain noise strength. During experiments, we found that only 1 or 2 iterations are enough to inpaint the unseen areas." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "In the following, we describe the aspects relevant to our system implementation details: dataset preparation in Sec. 1.1 and training setups in Sec. 4.2." }, { "figure_ref": [], "heading": "Dataset Preparation", "publication_ref": [ "b6", "b22", "b28", "b30" ], "table_ref": [], "text": "We use the Objaverse [7] dataset for 2.5D training data generation, which is a large-scale 3D object dataset containing 800K high-quality models. We use the captions provided by cap3d [23] as text prompts. We filter the dataset by sorting the CLIP scores and selecting the top 500K objects with high text-image consistency. Each object is firstly normal-ized at the center, and we render the scene from 32 viewpoints uniformly distributed in azimuth angles.\nBesides, we also adopt a large-scale 2D image-text dataset to improve the generation diversity. Specifically, we use the COYO-700M dataset [2], which also contains metadata like resolution and CLIP scores [29]. We filter the dataset with both width and height greater than 512, aesthetic scores [31] greater than 5, and watermark scores lower than 0.5, which results in a 65M-size subset. Though the filtered dataset is reduced to 1/10 of the original size, it is still larger than the 3D dataset. 
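As a concrete illustration of the filtering criteria above, a pandas-style sketch is given below; the file and column names are assumptions about the metadata layout rather than the actual schema.

```python
import pandas as pd

meta = pd.read_parquet("coyo700m_metadata.parquet")  # hypothetical metadata file
subset = meta[
    (meta["width"] > 512)
    & (meta["height"] > 512)
    & (meta["aesthetic_score"] > 5.0)   # aesthetic filter
    & (meta["watermark_score"] < 0.5)   # watermark filter
]
# Roughly 65M rows remain, about 1/10 of the original dataset.
```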
Actually, we do not use the whole filtered dataset during training. Please check the supplementary for more details." }, { "figure_ref": [], "heading": "Training Setup", "publication_ref": [], "table_ref": [], "text": "As introduced above, we train the model with both 2.5D rendered images and natural images, with a probability of 80% of selecting the former. This makes the number of instances seen in each batch nearly equal for the two kinds of data. We use the Stable Diffusion v2.1 base model as our backbone model and fine-tune the latent UNet only for another 50K steps with 1000 warmup steps. Similar to Zero123 [16], we use an image sample size of 256 × 256 for better and faster training convergence. The learning rate is set to 1e-5. We drop the text prompt conditioning with a probability of 15% and apply a noise offset of 0.05. The full training procedure is conducted on 32 NVIDIA A100 80G GPUs (800K steps for the text-to-normal model and 18K steps for the normal-conditioned RGB model, which take around 80 and 20 hours, respectively). The batch size is set to 45 on each GPU, which leads to a total batch size of 1440." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In the following, we present the experimental results of our approach and evaluate the design of our system, including qualitative comparisons against state-of-the-art techniques and quantitative evaluations of model performance." }, { "figure_ref": [ "fig_3" ], "heading": "Text-to-3D content generation", "publication_ref": [], "table_ref": [], "text": "Given a random input text prompt, the proposed system is able to generate a high-fidelity 3D triangle mesh. Fig. 3 shows a gallery of our generation results. The generated multi-view normal and RGB images are also presented beside the 3D mesh. Our multi-view normal diffusion model is able to generate high-quality normal maps with expressive geometry details, and the normal-conditioned RGB diffusion model also generates detailed textures aligned with the input normal maps, which validates the effectiveness of our cross-view attention design. All prompts used are unseen during training, which demonstrates the generalization ability." }, { "figure_ref": [ "fig_4", "fig_6" ], "heading": "Qualitative and Quantitative Evaluation", "publication_ref": [ "b27", "b5", "b11", "b29", "b28", "b29", "b28", "b6" ], "table_ref": [ "tab_3", "tab_2" ], "text": "Qualitative evaluation. In this section, we compare our method with SDS-based methods, including DreamFusion [28], Fantasia3D [6], and MVDream [33]. We also compare with direct 3D generation methods, including Point-E [26] and Shap-E [12]. The text prompts are taken from DreamFusion and were unseen during the fine-tuning of both MVDream and our models. Fig. 4 illustrates qualitative comparisons of the renderings. It is clearly seen that Point-E and Shap-E fail to generate reasonable text-aligned results. These direct 3D generation methods were trained on relatively small 3D datasets compared to large-scale 2D text-image datasets, leading to poor generalization ability. Besides, DreamFusion and Fantasia3D suffer from the multi-face problem, while the results from the latter contain more details because of its supervision on geometry only. The remaining two methods are 3D-aware and are thus able to produce reasonable 3D topology. MVDream generally achieves better visual quality, while our results are more consistent with the text prompts and take much less time to generate (35 minutes vs. 10 seconds). Sample diversity. Here, we compare the diversity of generated samples with MVDream. In this experiment, we generate 10 samples with the same prompt but different seeds. Fig. 5 presents the comparison.
Table 1. We evaluate the proposed two multi-view diffusion models by computing FID [11] (lower is better), IS [30] (higher is better), and CLIP scores [29] (higher is better) to measure the performance of different model variants.
Although both multi-view diffusion models are regularized by large-scale image-caption datasets to prevent overfitting on the 3D dataset, the results from MVDream still collapse to a single type because of the mode-seeking nature of SDS.
In contrast, our method can still keep the content diversity of the pre-trained diffusion model, because the construction of the 3D models is independent of the diffusion process, which faithfully follows the random denoising process. Quantitative evaluation. In the following, we quantitatively evaluate the image generation quality and the text-image consistency of the two proposed multi-view diffusion models. Table 1 reports the evaluation results. Specifically, the Fréchet Inception Distance (FID) [11] and Inception Score (IS) [30] are adopted to measure the generated image quality, and the CLIP cosine similarity [29] is calculated to measure the text-image consistency. We randomly select 2000 subjects, together with their multi-view RGB and normal renderings, from the Objaverse [7] dataset as our evaluation database. FID and IS are computed independently of viewpoint, while the CLIP similarity is taken as the maximum value across the four per-view scores.
In general, we find that the proposed model achieves similar or even better results compared to the ground-truth renderings, which indicates high image quality and image-text consistency. We also evaluate the training strategies used in multi-view normal diffusion training, including joint training with the large-scale 2D dataset and training with fewer but more text-consistent 3D subjects. It is clearly shown that the performance drops drastically when training without injecting the 2D in-the-wild dataset. We believe this is because fine-tuning purely on multi-view normal data leads to catastrophic forgetting of the originally learned distribution and thus to poor learning ability. Training on fewer but more text-consistent data leads to better IS and CLIP similarities, but worse FID. In practice, we found that this model has lower generalization ability and diversity compared to the model trained with more 3D data.
We also compare with previous SOTA methods quantitatively in Table 2. We randomly selected 50 prompts from DreamFusion, unseen during the fine-tuning of both our method and MVDream, as the evaluation set. We adopt IS, CLIP scores, and FID (against Objaverse renderings and the COCO validation set) to evaluate the rendered results. Running time is also presented. Our method outperforms direct 3D diffusion methods significantly across all metrics and is on par with state-of-the-art SDS-based methods. Our method achieves slightly better CLIP scores and FID but worse IS compared to MVDream, and consumes significantly less time for generation.
Please check the supplementary for more evaluations." }, { "figure_ref": [], "heading": "Limitations and Future Work", "publication_ref": [ "b3" ], "table_ref": [], "text": "Limited view numbers. Due to the small number of views, areas such as the top, the bottom, and concavities cannot be fully observed, and thus their geometry or appearance cannot be well reconstructed. Apart from the iterative update scheme, the multi-view diffusion model can be extended to more views. Texture quality. For the appearance, we fine-tune a multi-view normal-conditioned diffusion model for efficiency. However, the ability to generate realistic images is degraded due to the texture quality of the 3D training samples and their rendering quality. Apart from further enhancing the training samples, we can also apply state-of-the-art texture generation systems [4] for non-time-sensitive tasks.
Please check the supplementary for more discussions."
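For reference, the view-max CLIP consistency used in the quantitative evaluation above can be sketched as follows. This is an illustrative sketch only: it assumes the Hugging Face transformers CLIP implementation, and the checkpoint identifier and helper name are placeholders rather than the exact evaluation code.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Checkpoint identifier is illustrative; any CLIP model with image and text towers works.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def view_max_clip_score(prompt, view_images):
    """Return the maximum CLIP cosine similarity between one prompt and the
    four rendered views (PIL images) of a generated object."""
    inputs = processor(text=[prompt], images=view_images,
                       return_tensors="pt", padding=True)
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                       attention_mask=inputs["attention_mask"])
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    sims = (image_emb @ text_emb.T).squeeze(-1)  # one cosine similarity per view
    return sims.max().item()
```

FID and IS, by contrast, are computed per view without this per-object max aggregation, as noted above.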
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose to perform fast text-to-3D generation by finetuning a multi-view 2.5D diffusion from pre-trained RGB diffusion models. To learn multi-view consistency, the model is fine-tuned on multi-view normal map renderings, with cross-view attention as the structural guarantee. After the simultaneous generation of multi-view normal maps, 3D models are obtained by deforming meshes by differentiable rasterization. Finally, appearance is generated by multi-view normal-conditioned RGB diffusion. Our generation pipeline produces diverse and high-quality 3D models in 10 seconds, and demonstrates strong generalization to complex content and generates fine details. Extensive experiments are conducted to show that our method enables fast generation of realistic, complex, and diverse models." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work is supported by National Natural Science Foundation of China under Grants 62001213 and Hong Kong RGC GRF 16206722.\n--Supplementary Material --Due to the space limitation of the main paper, we provide supplementary materials to give an auxiliary demonstration. In this PDF file, we will present a detailed description of the implementation details, additional evaluation and discussions, and more results. We also provide a project page to present video results for better visualization. Project page: https://nju-3dv.github.io/projects/direct25." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "In this section, we describe more implementation details of the proposed system, including data preparation, iterative updating, inference time, and another texturing implementation." }, { "figure_ref": [], "heading": "Dataset Preparation", "publication_ref": [ "b6", "b22", "b28", "b30", "b6" ], "table_ref": [], "text": "We use the Objaverse [7] dataset for 2.5D training data generation, which is a large-scale 3D object dataset containing 800K high-quality models. We use the captions provided by Cap3d [23] as text prompts, which is the best 3D dataset caption method currently. Each object is firstly normalized at the center within a bounding box [-1, 1] 3 , and we render the scene from 32 viewpoints uniformly distributed in azimuth angles between [0 • , 360 • ]. The elevation is set to 0 • and camera FoV is set to 60 • . The camera distance from the origin (0, 0, 0) is set to a fixed distance equal to 1.5 times the focal length in normalized device coordinates. For lighting, we use a composition of random lighting selected from point lighting, sun lighting, spot lighting, and area lighting. RGB images and normal maps in world coordinates are rendered using a rasterizer-based renderer for each object.\nBesides, we also adopt a large-scale 2D image-text dataset to improve the generation diversity following mvdream [33]. Specifically, we use the COYO-700M dataset [2], which also contains metadata like resolution and CLIP scores [29], etc. We filter the dataset with both width and height greater than 512, aesthetic scores [31] greater than 5, and watermark scores lower than 0.5, which results in a 65M-size subset. Though the filtered dataset is reduced to 1/10 of the original size, it is still much larger than the 3D dataset. Actually, we do not consume the whole dataset within the designated training time. 
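Before detailing the per-model dataset usage, the rendering camera ring described above can be sketched as follows: 32 azimuth angles uniformly covering [0°, 360°), a fixed elevation of 0°, and a fixed camera distance (1.5 times the focal length in normalized device coordinates, treated here simply as a radius parameter). The look-at and axis conventions below are assumptions for illustration and may differ from the renderer actually used; intrinsics would then follow from the 60° field of view and the image resolution.

```python
import numpy as np

def lookat_extrinsic(cam_pos, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """World-to-camera rotation/translation for a camera looking at `target`.
    Uses an OpenCV-style convention (x right, y down, z forward); this is an assumption."""
    forward = target - cam_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    new_up = np.cross(right, forward)
    R = np.stack([right, -new_up, forward], axis=0)  # rows are the camera axes
    t = -R @ cam_pos
    return R, t

def render_viewpoints(n_views=32, radius=1.5, elevation_deg=0.0):
    """Cameras uniformly distributed in azimuth at a fixed elevation and distance."""
    poses = []
    elev = np.deg2rad(elevation_deg)
    for azimuth in np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False):
        cam_pos = radius * np.array([
            np.cos(elev) * np.cos(azimuth),
            np.cos(elev) * np.sin(azimuth),
            np.sin(elev),
        ])
        poses.append(lookat_extrinsic(cam_pos))
    return poses
```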
In the following, we describe the specific dataset usage for two proposed multiview diffusion model training. Text-to-normal multi-view diffusion model. As we want to generate high-quality and multi-view consistent normal maps from a single text prompt input, we are able to use all valid normal map renderings in Objaverse [7]. We filter the dataset by sorting the CLIP similarities between RGB images and captions and selecting the top 500K objects to keep a high text-image consistency. We take a similar 2D & 3D joint training strategy with MVDream [33], where 3D data and 2D data are randomly chosen in each batch with a probability of 80% and 20%, respectively. This trick can guarantee the same expected number of instances to be seen in each training step because 4 views are from the same object for 3D dataset. Also for 3D data, we add a special tag normal map to the end of captions to indicate the normal map prediction task. During inference, we also add this postfix to the prompt for normal map predictions. Normal conditioned RGB multi-view diffusion model. Some samples in the Objaverse dataset has cartoonish appearance, and we would like to filter out these samples. Specifically, we first filter the dataset to obtain renderings whose aesthetic scores are larger than 5, which results in a 130K subset. Then, we compute the CLIP scores between the remaining images and two pre-defined positive and negative quality description prompts 1 . We compute the ratio of the positive scores and negative scores and select the top 10K data as our training dataset. We found that this strategy successfully selected the high-quality renderings in the dataset, and works better than training on all rendering data." }, { "figure_ref": [], "heading": "Iterative Updating", "publication_ref": [], "table_ref": [], "text": "In most cases, a single run of the pipeline is enough to generate high-quality results. However, for some topologies, there may be large areas unobserved by the 4 perspectives (e.g., large planar areas on the top of the object). To address this issue, we could iteratively update rendered images from novel views by the inpainting [21] pipeline to refine the texture. Specifically, we compute an inpainting mask indicating the unseen areas at a new camera viewpoint, and the invisible areas are edited given a certain noise strength. In Fig. 2, we present the results of the iterative updating. In this example, we inpaint the top views of the generated bread and fuse the resulted RGB images back to the generated model. As shown in the figure, the top areas of the bread are unseen during the first generation, and we inpaint the unseen areas in the second run. The inpainting mask is used to ensure that only the unseen areas would be modified, while other regions are kept unchanged. The final generated model (Fig 2 (e)) demonstrates the effectiveness of the strategy. During experiments, we found that 1 or 2 iterations suffice to recover the unseen areas. 
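Returning to the quality-based data selection described above for the normal-conditioned RGB model, a rough sketch of the positive/negative ratio ranking is given below. It operates on pre-computed, L2-normalized CLIP embeddings (the embedding step itself is omitted), and the array names and sizes are illustrative; the actual positive and negative quality prompts are listed in the footnote that follows.

```python
import numpy as np

def select_by_quality(image_embs, pos_text_emb, neg_text_emb, top_k=10_000):
    """Rank renderings by the ratio of their CLIP similarity to a positive quality
    prompt over a negative one, and return the indices of the top_k renderings."""
    pos = image_embs @ pos_text_emb          # cosine similarity to the positive prompt
    neg = image_embs @ neg_text_emb          # cosine similarity to the negative prompt
    # CLIP similarities to such prompts are typically positive, so the clip below
    # is only a guard against division by small or negative values in this toy setup.
    ratio = pos / np.clip(neg, 1e-6, None)
    return np.argsort(-ratio)[:top_k]

# Toy usage with random vectors standing in for real CLIP features
# (the real candidate pool is on the order of 130K renderings).
rng = np.random.default_rng(0)
image_embs = rng.normal(size=(1_000, 512))
image_embs /= np.linalg.norm(image_embs, axis=1, keepdims=True)
pos_text_emb = rng.normal(size=512); pos_text_emb /= np.linalg.norm(pos_text_emb)
neg_text_emb = rng.normal(size=512); neg_text_emb /= np.linalg.norm(neg_text_emb)
keep = select_by_quality(image_embs, pos_text_emb, neg_text_emb, top_k=100)
```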
1 Positive prompt: realistic, 4K, vivid, highly detailed, high resolution, high quality, photography, HD, HQ, full color; Negative prompt: cartoon, flat color, simple texture, ugly, dark, bad anatomy, blurry, pixelated obscure, unnatural colors, poor lighting, dull, unclear, cropped, lowres, low quality, artifacts, duplicate, morbid, mutilated, poorly drawn face, deformed, dehydrated, bad proportions " }, { "figure_ref": [], "heading": "Inference Time", "publication_ref": [], "table_ref": [], "text": "Compared to SDS optimization-based methods which typically take over half an hour, our method is efficient enough to generate high-quality results in 10 seconds: On a single Nvidia A100 GPU, the denoising process of the two multiview diffusion models each takes around 2.5 seconds for 50 DDIM steps. The explicit geometry optimization takes around 2 ∼ 3 seconds for 200 optimization steps, which depends on the triangle mesh complexity. The final texture fusion takes around 1.5 seconds. the efficiency and diversity of the proposed system enable selection from batch generated samples, which greatly increases the practicality for prototyping and digital content creation. For iterative updating, typically 1-3 passes are enough to paint the unseen areas and can be finished in less than one minute, which is still much faster than the previous SDS optimization-based methods." }, { "figure_ref": [], "heading": "Alternative Texturing Implementation", "publication_ref": [], "table_ref": [], "text": "Besides the mentioned texturing method in the main paper, we also propose an alternative optimization-based texturing method as our open-source version. Similar to geometry optimization, we optimize the texture map in UV space by minimizing the reconstruction loss of the multi-view RGB images. Specifically, we adopt the L 1 RGB loss, SSIM loss, and a total variation (TV) loss on the UV texture map as a regularization. The weights for these three losses are set to 1.0, 10.0, and 1.0. In experiments, we found that only 50 -100 steps are enough to generate satisfactory results, and the optimization takes only about 1 second." }, { "figure_ref": [], "heading": "Geometry-Appearance Disentangled Generation", "publication_ref": [], "table_ref": [], "text": "Due to the two-stage setting in the proposed method, one could generate random RGB images while keeping the geometry fixed, which enables geometry-appearance disentangled generation and offers better control over the gen-eration process. Fig. 2 demonstrates the disentangled generation results. It demonstrates that users can fix the satisfying generated geometry and then proceed to appearance generation." }, { "figure_ref": [ "fig_1" ], "heading": "More Evaluations", "publication_ref": [], "table_ref": [], "text": "Here we present more evaluations of the proposed fast meshing algorithm. Specifically, We present normal consistency and Chamfer Distance (×10 -3 ) evaluation w.r.t. optimization steps (Tab. 1) of the fast meshing on 15 chosen Objaverse meshes with ground truth normal maps. In most cases, we found no obvious improvements beyond 200 steps, as also shown in Fig. 1.\nThe number of views for reconstruction is constrained by the diffusion model: SD2.1 512x512 resolution aligns with four 256x256 views. Our early experiments suggested optimal quality when the total pixel count aligns with the base model's resolution. So we also do this ablation study by feeding ground truth to the optimization like the previous one. 
We find that more views lead to slightly better reconstruction, and leave this for future work." }, { "figure_ref": [], "heading": "Discussions", "publication_ref": [ "b39" ], "table_ref": [], "text": "In the following, we provide a detailed discussion about the settings of our system, including the two-stage sequential models, and normal predictions v.s. depth predictions. Two-stage sequential architecture. As demonstrated in Sec. 2, a two-stage sequential architecture naturally enables the geometry-appearance disentangled generation and provides more freedom on both geometry and appearance generation. Besides, using a combined pipeline also leads to a double GPU memory requirement compared to the sequential setting, which could become a great burden under the multi-view setting. This challenge becomes much more severe when one increases the spatial resolution of the diffusion model, e.g. from 256 to 512 or even 1024. Finally, the sequential model has better multi-view and geometryappearance consistency. Instead of the generation normal maps, we use the ones rendered from the optimized mesh for the texture diffusion model input. On the one hand, the rendered normal maps are guaranteed to be consistent. On the other hand, it provides better alignment between the generated RGB images and the actual geometry. For the above reasons, our system takes the two-stage sequential as our architecture. Normal v.s. Depth. Another alternative choice for our system is to use depth instead of normal. Because normal is the first-order derivative of the depth, it is free from scale ambiguity and provides a higher tolerance for multi-view inconsistency. Optimizing depth value directly requires much higher multi-view accuracy and therefore decreases the robustness of the geometry optimization system. Previous work [40] also found that using normal priors performs better than the depth priors, which also supports our assumption. Secondly, normal serves as a better conditioning signal for RGB generation because it generally has better alignment than depth. For example, sharp normal changes result in RGB discontinuity because of shading, but in this case depth may still be smooth. Therefore, we adopted normal as our shape representations and found it worked well." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Limitations and Failure Cases", "publication_ref": [ "b3" ], "table_ref": [], "text": "In the main paper, we briefly discuss the limitations of the proposed pipeline and here we present more discussions. Multi-view consistency. The multi-view RGB/normals are generated by the self-attention mechanism in the multi-view diffusion models without any physical-based supervision, which means the multi-view consistency is not guaranteed. This is an inherent issue in multiview diffusion models like MVDream, which is known to be prone to geometric misalignment. Our first stage text-to-normal diffusion model also suffers from this issue, while we found that the issue on the normal model is smaller than that on the RGB model. Besides, the second stage adopts multiview-consistent rendered normals as input, which relieves challenges faced by one-stage models like MVDream. We will include this part in future work. Limited view numbers and failure cases. Because the number of views is small, areas such as top, bottom, and concavity cannot be fully observed, and thus their geometry or appearance cannot be well reconstructed. 
Apart from the iterative update scheme, the multi-view diffusion model can be further extended to handle more views.
We also emphasize that the limited number of normal-map views may not provide sufficient information for reconstruction, leading to degraded performance as shown in Tab. 1. We also present an example in Fig. 4. This is due to an intrinsic issue of normals being derivatives of the world positions, which introduces ambiguities in shape: we only know a surface's orientation in the world coordinate system, but not its specific depth, because the normal maps are identical for any depth. As shown in Fig. 4, the television screen fails to reconstruct given normals from only 4 views. The full screen protrudes overall, but under the training viewpoints there is no difference in normals: they all point in the same direction. This is an inherent issue with reconstruction based on normals, and using more viewpoints can greatly alleviate this problem. Our reconstruction system also suffers from this issue, and more views could lead to more accurate reconstruction. Texture quality. For the appearance, we fine-tune a multi-view normal-conditioned diffusion model for efficiency. However, the ability to generate realistic images is degraded due to the texture quality of the 3D training samples and their rendering quality. Apart from further enhancing the training samples, we can also apply state-of-the-art texture generation systems [4] for non-time-sensitive tasks." }, { "figure_ref": [ "fig_13" ], "heading": "More Results", "publication_ref": [], "table_ref": [], "text": "We present more results of the proposed method on the following pages, including the diverse generation ability (Fig. 5) and more generation results (Figs. 6, 7, and 8)." }, { "figure_ref": [], "heading": "Additional Video Results", "publication_ref": [], "table_ref": [], "text": "We present video results of the proposed method on our project page: https://nju-3dv.github.io/projects/direct25. Please check it for better visualization. Figure 7. More generation results.
Prompts for the above results from top to bottom and left to right are R1-l) a beagle in a detective's outfit, R1-2) a blue jay standing on a large basket of rainbow macarons, R1-3) a blue motorcycle, R2-1) a dragon-cat hybrid, R2-2) a DSLR photo of a bald eagle, R2-3) a DSLR photo of a bulldozer, R3-1) a DSLR photo of a hippo wearing a sweater, R3-2) a DSLR photo of a pair of tan cowboy boots, studio lighting, product photography, R3-3) a DSLR photo of a plate piled high with chocolate chip cookies, R4-1) a DSLR photo of a porcelain dragon, R4-2) a DSLR photo of a puffin standing on a rock, R4-3) a DSLR photo of a red-eyed tree frog, R5-1) a DSLR photo of a squirrel wearing a leather jacket, R5-2) a DSLR photo of a tarantula, highly detailed, R5-3) a DSLR photo of a toy robot, R6-1) a DSLR photo of an ice cream sundae, R6-2) an old vintage car, R6-3) a DSLR photo of an ornate silver gravy boat sitting on a patterned tablecloth, R7-1) a frazer nash super sport car, R7-2) a freshly baked loaf of sourdough bread on a cutting board and R7-3) a highland cow. Figure 8. More generation results. Prompts for the above results from top to bottom and left to right are R1-l) a lionfish, R1-2) a marble bust of a mouse, R1-3) a metal sculpture of a lion's head, highly detailed, R2-1) a pig wearing a backpack, R2-2) a rabbit, animated movie character, high detail 3d model, R2-3) a ripe strawberry, R3-1) a snail on a leaf, R3-2) a squirrel dressed like Henry VIII king of England, R3-3) a wide angle DSLR photo of a colorful rooster, R4-1) a zoomed out DSLR photo of a beautiful suit made out of moss, on a mannequin. Studio lighting, high quality, high resolution, R4-2) a zoomed out DSLR photo of a fresh cinnamon roll covered in glaze, R4-3) a zoomed out DSLR photo of a model of a house in Tudor style, R5-1) a cute steampunk elephant, R5-2) a DSLR photo of a delicious croissant, R5-3) a DSLR photo of a plush t-rex dinosaur toy, studio lighting, high resolution, R6-1) a DSLR photo of an elephant skull, R6-2) a flower made out of metal, R6-3) a hotdog in a tutu skirt, R7-1) a shiny red stand mixer, R7-2) a wide angle zoomed out view of Tower Bridge made out of gingerbread and candy and R7-3) a zoomed out DSLR photo of a wizard raccoon casting a spell." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "https://nju-3dv.github.io/" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "projects/direct25. * This project was performed during Yuanxun Lu's internship at Apple." } ]
Recent advances in generative AI have unveiled significant potential for the creation of 3D content. However, current methods either apply a pre-trained 2D diffusion model with the time-consuming score distillation sampling (SDS), or use a direct 3D diffusion model trained on limited 3D data, which loses generation diversity. In this work, we approach the problem by employing a multi-view 2.5D diffusion fine-tuned from a pre-trained 2D diffusion model. The multi-view 2.5D diffusion directly models the structural distribution of 3D data, while still maintaining the strong generalization ability of the original 2D diffusion model, filling the gap between 2D diffusion-based and direct 3D diffusion-based methods for 3D content generation. During inference, multi-view normal maps are generated using the 2.5D diffusion, and a novel differentiable rasterization scheme is introduced to fuse the almost-consistent multi-view normal maps into a consistent 3D model. We further design a normal-conditioned multi-view image generation module for fast appearance generation given the 3D geometry. Our method is a one-pass diffusion process and does not require any SDS optimization as post-processing. We demonstrate through extensive experiments that our direct 2.5D generation with the specially designed fusion scheme can achieve diverse, mode-seeking-free, and high-fidelity 3D content generation in only 10 seconds.
Direct2.5: Diverse Text-to-3D Generation via Multi-view 2.5D Diffusion
[ { "figure_caption": "Normal Conditioned RGB Diffusion ModelGenerate a high-quality mesh of \"a DSLR photo of a pirate collie dog, high resolution\".", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 :1Multi-view Geometry Optimization Input: Multi-view normal maps Ii and camera parameters πi, where i ∈ {0, 1, 2, 3} Output: M = (V, F ) output triangle mesh Parameters: T : max number of optimization iterations λα, λnc: weights for alpha and normal consistency loss Vocc ← InitOccupancyVolume for i ∈ {0, 1, 2, 3} do Compute alpha mask αi ← thresholding(Ii) Update Vocc ← SpaceCarving(αi, πi) end M ← MarchingCubes(Vocc) M ← MeshSimplification(M ) for iter ← T do Î, α ← DifferentiableRender(M, π) loss ← Ln(I, Î) + λαLα(α, α) + λncLnc(M ) Optimize(loss) M ← Remesh(M ) end", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2. Illustration of explicit geometry optimization. (a) is the generated normal images given a prompt \"a DSLR photo of a pirate collie dog, high resolution\". (b) shows the space carving initialization results mesh in the front and side views. (c), (d), (e) present the intermediate optimization states at 50, 100, 200 steps, separately. As shown, 200 steps are enough to reconstruct the fine details like the skin folds of the dog's face and the thin dog tail.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "AFigure 3 .3Figure 3. A gallery of our text-to-3d generation results. Given text prompts as description input, our method outputs high-quality textured triangle mesh in only 10 seconds. Note that the prompts are not from the training set. Best viewed zoomed in.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Qualitative comparisons. Direct 3D diffusion systems are not well generalized to the complex prompts. SDS-based methods except MVDream are slow and suffered from multi-face and over-saturation problems. MVDream can generate realistic geometry and appearance with fine details but has limited diversity (Fig.5). In contrast, our system can generate realistic 3D models efficiently. Input prompts: 1) a zoomed out DSLR photo of a wizard raccoon casting a spell, 2) a DSLR photo of a turtle standing on its hind legs, wearing a top hat and holding a cane, 3) a DSLR photo of a pirate collie dog, high resolution, and 4) a DSLR photo of a robot tiger.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Stable Diffusion v2.1 base model as our backbone model and fine-tune the latent UNet only for another 50K steps with 1000 warmup steps. Similar to Zero123 [16], we use an image sample size of 256 × 256 for better and faster training convergence. The learning rate is set to 1e -5. We drop the text prompt conditioning with a probability of 15% and apply a noise offset of 0.05. The full training procedure is conducted on 32 NVIDIA A100 80G GPUs (800K steps for the text-to-normal model and 18K steps for the normal-conditioned RGB model, which takes around 80 and 20 hours separately). The batch size is set to 45 on each GPU which leads to a total batch size of 1440.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "aFigure 5 .5Figure 5. Comparison of sample diversity. 
Multiple samples are generated from the same prompt with different seeds. Our method is able to generate various samples while MVDream generates extremely similar results due to the SDS's mode-seeking nature.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 1 .1Figure 1. Visualization of more than 200 optimization steps.", "figure_data": "", "figure_id": "fig_7", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2. Demonstration of the iterative updating. (a) is the single-pass generated multi-view RGB images given a prompt \"a freshly baked loaf of sourdough bread on a cutting board\". (b) shows the rendered results of the single-pass generated model. As seen, the top area remains uncolored. (c) shows the generated inpainting mask under the new view, where the white areas denote the areas that are invisible and need to be inpainted. (d) is the inpainted results under the new view given the previously rendered results and the visibility mask. (e) demonstrates the final generated mesh under the top view and two side-top views. The previous uncolored areas now have been inpainted with reasonable and coherent colors.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Demonstration of Geometry-appearance disentangled generation. Due to the two-stage sequential setting, our method greatly increases the control ability of the content generation results.", "figure_data": "", "figure_id": "fig_9", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Failure case. The normal-based reconstruction system suffered from the depth ambiguity issue. In this example, 4-view reconstruction fails on the television screen and introduces artifacts. Using more views solves this problem.", "figure_data": "", "figure_id": "fig_10", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "aDSLR photo of an ice cream sundae a blue motorcycle a DSLR photo of a toy robot a DSLR photo of a pirate collie dog, high resolution a DSLR photo of a corgi puppy a DSLR photo of a human skull a ceramic lion a zoomed out DSLR photo of a wizard raccoon casting a spell", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. More Diverse Generation Results. Our method avoids the common mode-seeking problem by SDS and generates diverse results.", "figure_data": "", "figure_id": "fig_12", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure6. Results Gallery. Given text prompts as description input, our method outputs high-quality textured triangle mesh in only 10 seconds. The generated multi-view normal and RGB images are shown beside the rendered models. Prompts for the above left column results are R1) a baby bunny sitting on top of a stack of pancakes, R2) a beautiful rainbow fish, R3) a DSLR photo of an astronaut standing on the surface of mars, R4) a steam engine train, high resolution, R5) a DSLR photo of a delicious croissant, and R6) a beautiful dress made out of garbage bags, on a mannequin. Studio lighting, high quality, high resolution. 
Prompts for the above right column results are R1) a bald eagle carved out of wood, R2) a DSLR photo of a robot tiger, R3) a DSLR photo of a teal moped, R4) a turtle standing on its hind legs, wearing a top hat and holding a cane, R5) a zoomed out DSLR photo of a marble bust of a fox head, and R6) a DSLR photo of a corgi puppy.", "figure_data": "", "figure_id": "fig_13", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Quantitative comparisons with previous methods.", "figure_data": "Methods \\ Metrics IS (↑) CLIP (↑) FID (↓ objv.) FID (↓ COCO) Run TimePoint-E7.2650.220104.105164.765∼ 20 sShap-E7.4120.236103.557163.105∼ 4 sDreamfusion7.7240.245125.873150.285∼ 50 mFantasia3d8.3110.207132.941150.255∼ 115 mProlificDreamer9.4570.269121.577124.185∼ 5 hMVDream8.1800.262117.715133.089∼ 35 mOurs8.1110.26782.324126.014∼ 10 s", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "/ 1.57 0.81 / 1.42 0.82 / 1.11 0.82 / 1.20 0.81 / 1.22 8-view 0.81 / 1.20 0.85 / 1.18 0.85 / 0.99 0.84 / 1.15 0.85 / 1.21 16-view 0.82 / 1.14 0.85 / 0.91 0.86 / 1.06 0.86 / 1.07 0.86 / 1.06 Normal consistency (↑) and Chamfer-Distance (↓) evaluation for fast meshing under view and optimization step numbers.", "figure_data": "views \\ steps501002004006004-view0.78", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" } ]
Yuanxun Lu; Jingyang Zhang; Shiwei Li; Tian Fang; David Mckinnon; Yanghai Tsin; Long Quan; Xun Cao; Yao Yao
[ { "authors": "Jonathan T Barron; Ben Mildenhall; Matthew Tancik; Peter Hedman; Ricardo Martin-Brualla; Pratul P Srinivasan", "journal": "", "ref_id": "b0", "title": "Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields", "year": "2021" }, { "authors": "Minwoo Byeon; Beomhee Park; Haecheon Kim; Sungjun Lee; Woonhyuk Baek; Saehoon Kim", "journal": "", "ref_id": "b1", "title": "Coyo-700m: Image-text pair dataset", "year": "2022" }, { "authors": "Thomas Angel X Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Su", "journal": "", "ref_id": "b2", "title": "Shapenet: An information-rich 3d model repository", "year": "2015" }, { "authors": "Dave Zhenyu; Chen ; Yawar Siddiqui; Hsin-Ying Lee; Sergey Tulyakov; Matthias Nießner", "journal": "", "ref_id": "b3", "title": "Text2tex: Text-driven texture synthesis via diffusion models", "year": "2023" }, { "authors": "Hansheng Chen; Jiatao Gu; Anpei Chen; Wei Tian; Zhuowen Tu; Lingjie Liu; Hao Su", "journal": "", "ref_id": "b4", "title": "Single-stage diffusion nerf: A unified approach to 3d generation and reconstruction", "year": "2023" }, { "authors": "Rui Chen; Yongwei Chen; Ningxin Jiao; Kui Jia", "journal": "", "ref_id": "b5", "title": "Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation", "year": "2023" }, { "authors": "Matt Deitke; Dustin Schwenk; Jordi Salvador; Luca Weihs; Oscar Michel; Eli Vanderbilt; Ludwig Schmidt; Kiana Ehsani; Aniruddha Kembhavi; Ali Farhadi", "journal": "", "ref_id": "b6", "title": "Objaverse: A universe of annotated 3d objects", "year": "2023" }, { "authors": "Sara Fridovich-Keil; Giacomo Meanti; Frederik Rahbaek Warburg; Benjamin Recht; Angjoo Kanazawa", "journal": "", "ref_id": "b7", "title": "K-planes: Explicit radiance fields in space, time, and appearance", "year": "2023" }, { "authors": "Xiaodong Gu; Zhiwen Fan; Siyu Zhu; Zuozhuo Dai; Feitong Tan; Ping Tan", "journal": "", "ref_id": "b8", "title": "Cascade cost volume for high-resolution multi-view stereo and stereo matching", "year": "2020" }, { "authors": "Anchit Gupta; Wenhan Xiong; Yixin Nie; Ian Jones; Barlas Oguz", "journal": "", "ref_id": "b9", "title": "3dgen: Triplane latent diffusion for textured mesh generation", "year": "2023" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "Advances in neural information processing systems", "ref_id": "b10", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "Heewoo Jun; Alex Nichol", "journal": "", "ref_id": "b11", "title": "Shap-e: Generating conditional 3d implicit functions", "year": "2023" }, { "authors": "N Kiriakos; Steven M Kutulakos; Seitz", "journal": "International journal of computer vision", "ref_id": "b12", "title": "A theory of shape by space carving", "year": "2000" }, { "authors": "Samuli Laine; Janne Hellsten; Tero Karras; Yeongho Seol; Jaakko Lehtinen; Timo Aila", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b13", "title": "Modular primitives for high-performance differentiable rendering", "year": "2020" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b14", "title": "Magic3d: High-resolution text-to-3d content creation", "year": "2023" }, { "authors": "Ruoshi Liu; Rundi Wu; 
Basile Van Hoorick; Pavel Tokmakov; Sergey Zakharov; Carl Vondrick", "journal": "", "ref_id": "b15", "title": "Zero-1-to-3: Zero-shot one image to 3d object", "year": "2023" }, { "authors": "Yuan Liu; Cheng Lin; Zijiao Zeng; Xiaoxiao Long; Lingjie Liu; Taku Komura; Wenping Wang", "journal": "", "ref_id": "b16", "title": "Syncdreamer: Generating multiview-consistent images from a single-view image", "year": "2023" }, { "authors": "Zhen Liu; Yao Feng; J Michael; Derek Black; Liam Nowrouzezahrai; Weiyang Paull; Liu", "journal": "", "ref_id": "b17", "title": "Meshdiffusion: Score-based generative 3d mesh modeling", "year": "2023" }, { "authors": "Xiaoxiao Long; Yuan-Chen; Cheng Guo; Yuan Lin; Zhiyang Liu; Lingjie Dou; Yuexin Liu; Song-Hai Ma; Marc Zhang; Christian Habermann; Theobalt", "journal": "", "ref_id": "b18", "title": "Wonder3d: Single image to 3d using cross-domain diffusion", "year": "2023" }, { "authors": "E William; Harvey E Lorensen; Cline", "journal": "ACM SIGGRAPH Computer Graphics", "ref_id": "b19", "title": "Marching cubes: A high resolution 3d surface construction algorithm", "year": "1987" }, { "authors": "Andreas Lugmayr; Martin Danelljan; Andres Romero; Fisher Yu; Radu Timofte; Luc Van Gool", "journal": "", "ref_id": "b20", "title": "Repaint: Inpainting using denoising diffusion probabilistic models", "year": "2022" }, { "authors": "Shitong Luo; Wei Hu", "journal": "", "ref_id": "b21", "title": "Diffusion probabilistic models for 3d point cloud generation", "year": "2021" }, { "authors": "Tiange Luo; Chris Rockwell; Honglak Lee; Justin Johnson", "journal": "", "ref_id": "b22", "title": "Scalable 3d captioning with pretrained models", "year": "2023" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b23", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b24", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2022" }, { "authors": "Alex Nichol; Heewoo Jun; Prafulla Dhariwal; Pamela Mishkin; Mark Chen", "journal": "", "ref_id": "b25", "title": "Point-e: A system for generating 3d point clouds from complex prompts", "year": "2022" }, { "authors": "Werner Palfinger", "journal": "Computer Animation and Virtual Worlds", "ref_id": "b26", "title": "Continuous remeshing for inverse rendering", "year": "2022" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b27", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b28", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Tim Salimans; Ian Goodfellow; Wojciech Zaremba; Vicki Cheung; Alec Radford; Xi Chen", "journal": "Advances in neural information processing systems", "ref_id": "b29", "title": "Improved techniques for training gans", "year": "2016" }, { "authors": "Christoph Schuhmann", "journal": "", "ref_id": "b30", "title": "Improved aesthetic predictor", "year": "2022" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; 
Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Yichun Shi; Peng Wang; Jianglong Ye; Mai Long; Kejie Li; Xiao Yang", "journal": "", "ref_id": "b32", "title": "Mvdream: Multi-view diffusion for 3d generation", "year": "2023" }, { "authors": "Jaehyeok Shim; Changwoo Kang; Kyungdon Joo", "journal": "", "ref_id": "b33", "title": "Diffusion-based signed distance fields for 3d shape generation", "year": "2023" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b34", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "Shitao Tang; Fuyang Zhang; Jiacheng Chen; Peng Wang; Yasutaka Furukawa", "journal": "", "ref_id": "b35", "title": "Mvdiffusion: Enabling holistic multiview image generation with correspondence-aware diffusion", "year": "2023" }, { "authors": "Hung-Yu Tseng; Qinbo Li; Changil Kim; Suhib Alsisan; Jia-Bin Huang; Johannes Kopf", "journal": "", "ref_id": "b36", "title": "Consistent view synthesis with pose-guided diffusion models", "year": "2023" }, { "authors": "Michael Waechter; Nils Moehrle; Michael Goesele", "journal": "Springer", "ref_id": "b37", "title": "Let there be color! -Large-scale texturing of 3D reconstructions", "year": "2014" }, { "authors": "Haochen Wang; Xiaodan Du; Jiahao Li; Raymond A Yeh; Greg Shakhnarovich", "journal": "", "ref_id": "b38", "title": "Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation", "year": "2023" }, { "authors": "Jiepeng Wang; Peng Wang; Xiaoxiao Long; Christian Theobalt; Taku Komura; Lingjie Liu; Wenping Wang", "journal": "Springer", "ref_id": "b39", "title": "Neuris: Neural reconstruction of indoor scenes using normal priors", "year": "2022" }, { "authors": "Peng Wang; Lingjie Liu; Yuan Liu; Christian Theobalt; Taku Komura; Wenping Wang", "journal": "", "ref_id": "b40", "title": "Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction", "year": "2021" }, { "authors": "Xintao Wang; Liangbin Xie; Chao Dong; Ying Shan", "journal": "", "ref_id": "b41", "title": "Real-esrgan: Training real-world blind super-resolution with pure synthetic data", "year": "2021" }, { "authors": "Zhengyi Wang; Cheng Lu; Yikai Wang; Fan Bao; Chongxuan Li; Hang Su; Jun Zhu", "journal": "", "ref_id": "b42", "title": "Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation", "year": "2023" }, { "authors": "Yao Yao; Zixin Luo; Shiwei Li; Tian Fang; Long Quan", "journal": "", "ref_id": "b43", "title": "Mvsnet: Depth inference for unstructured multi-view stereo", "year": "2018" }, { "authors": "Yao Yao; Zixin Luo; Shiwei Li; Tianwei Shen; Tian Fang; Long Quan", "journal": "", "ref_id": "b44", "title": "Recurrent mvsnet for high-resolution multi-view stereo depth inference", "year": "2019" }, { "authors": "Xiaohui Zeng; Arash Vahdat; Francis Williams; Zan Gojcic; Or Litany; Sanja Fidler; Karsten Kreis", "journal": "", "ref_id": "b45", "title": "Lion: Latent point diffusion models for 3d shape generation", "year": "2022" }, { "authors": "Jingyang Zhang; Shiwei Li; Zixin Luo; Tian Fang; Yao Yao", "journal": "International Journal of Computer Vision", "ref_id": "b46", "title": "Vis-mvsnet: Visibility-aware multi-view stereo network", "year": "2023" }, { "authors": "Jingyang 
Zhang; Shiwei Li; Yuanxun Lu; Tian Fang; David Mckinnon; Yanghai Tsin; Long Quan; Yao Yao", "journal": "", "ref_id": "b47", "title": "Jointnet: Extending text-to-image diffusion for dense distribution modeling", "year": "2024" }, { "authors": "Linqi Zhou; Yilun Du; Jiajun Wu", "journal": "", "ref_id": "b48", "title": "3d shape generation and completion through point-voxel diffusion", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 318.51, 509.49, 216.95, 22.15 ], "formula_id": "formula_0", "formula_text": "L = E (X ,c);E∼N (0,1);t xi∈X ;ϵi∈E ∥ϵ i -ϵ θ (x i,t , c, t)∥ 2 2 ." }, { "formula_coordinates": [ 5, 107.65, 533.24, 178.71, 9.65 ], "formula_id": "formula_1", "formula_text": "L V = L n + λ α L α + λ nc L nc ,(2)" }, { "formula_coordinates": [ 5, 90.4, 556.13, 103.87, 14.56 ], "formula_id": "formula_2", "formula_text": "L n = 1 4 4 i ||n i -ni || 1" } ]